How economic indicators affect financial markets - Sandeep Kanao
Traders are always trying to understand the factors that cause the market to rise and fall. The truth is that there are a multitude of factors, and millions of investors make decisions that impact the market every day. Corporate earnings and news, political news, and general market sentiment can all move the market.
Which economic indicators affect financial markets and how they affect forex markets - Sandeep Kanao
A/ INFLATION
Inflation is a significant indicator for securities markets because it determines how much of the real value of an investment is being lost, and the rate of return you need to compensate for that erosion. For example, if inflation is at 3% this year, and your investment also increases by 3%, in real terms you have just managed to stay even.
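As a rough numeric sketch of that example (the variable names are illustrative, not from the original post):

using System;

// Minimal sketch: real return from a nominal return and the inflation rate.
// The 3% figures are the example from the text above.
double nominalReturn = 0.03;   // 3% nominal gain on the investment
double inflation = 0.03;       // 3% inflation over the same period

// Exact relation; for small rates this is approximately nominalReturn - inflation.
double realReturn = (1 + nominalReturn) / (1 + inflation) - 1;

Console.WriteLine("Real return: " + realReturn.ToString("P2"));   // ~0.00%, you only kept pace with inflation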
If the rate of inflation increases, purchasing power erodes more quickly, reducing the disposable income people have to spend. This can have a negative effect on an economy and hence on the currency.
However, if a country experiences deflation, i.e. prices actually fall, investors could also see this as an indicator that the economy is performing poorly. Therefore, this can also have a negative effect on the value of a currency.
A central bank will therefore try to target an acceptable level of inflation – for example, an inflation level between 2–3%.
If the inflation rate is reported to be within the target range, the currency value does not tend to react very much. The currency value reacts much more if the inflation rate is drastically outside this range.
B/ GROSS DOMESTIC PRODUCT (GDP)
GDP tells you how fast the economy is growing (or contracting). GDP is the dollar value of all goods and services produced by a given country during a certain period.
Any significant change in the GDP, either up or down, can have a big effect on investing sentiment.
If the GDP growth rate is high, then the economy is considered to be robust and the currency will likely appreciate in value. If the GDP growth rate slows, then this can be seen as a weakening economy and the currency is likely to depreciate.
C/ PER CAPITA GDP
Per capita GDP is a measure of the total output of a country that takes the gross domestic product (GDP) and divides it by the number of people in the country. Per capita GDP is especially useful when comparing one country to another because it shows the relative performance of the countries. A rise in per capita GDP signals growth in the economy and tends to translate into an increase in productivity.
D/ GDP - PPP (Purchasing power parity)
The purchasing power parity (PPP) exchange rate is the exchange rate implied by the purchasing power of a currency relative to a selected standard (usually the United States dollar). It is a comparative (and theoretical) exchange rate; the only way to realize this rate directly would be to sell an entire CPI basket in one country, convert the cash at the currency market rate, and then buy the same basket of goods in the other country with the converted cash.
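A minimal sketch of the idea, using made-up basket prices (all names and figures are assumptions for illustration):

using System;

// Hypothetical PPP exchange rate implied by the cost of the same CPI basket
// in two countries; every number here is invented for the example.
double basketCostLocal = 1200.0;   // cost of the basket in local currency units
double basketCostUsd = 1000.0;     // cost of the same basket in US dollars

// PPP rate: local currency units per US dollar that equalise the two baskets.
double pppRate = basketCostLocal / basketCostUsd;   // 1.20 local units per USD

// If the market rate differs from the PPP rate, the local currency is over- or
// under-valued on a PPP basis relative to the dollar.
double marketRate = 1.35;
Console.WriteLine("PPP rate: " + pppRate.ToString("F2") + ", market rate: " + marketRate.ToString("F2"));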
E/ THE LABOR MARKET
Another major factor influencing the economy is the labor market. The key indicators most investors focus on here are total employment and the unemployment rate.
Low unemployment rates mean a strong economy, which increases the demand for the currency.
F/ ECONOMIC GROWTH OUTLOOK
Government agencies, as well as investment banks and economic think tanks, publish growth outlooks – an estimate of what they think the future GDP will be.
Growth outlooks give investors and traders guidance by providing an estimate of future GDP. If the growth outlook is lowered, the currency tends to fall; if it is raised, the value of the currency tends to appreciate.
G/ RETAIL SALES
Consumer spending often accounts for the majority of economic activity, and even where it does not, it still makes up a substantial proportion, so retail sales data is an important indicator. Retail sales measure the total amount of consumer spending in a given month across various sectors, such as electronics retailers, restaurants and car dealerships, to name a few. Strong retail sales mean consumers are confident in the economy and have more money to spend, which has a positive effect on the currency.
H/ HOME SALES
The housing market is one of the most visible signs of strong growth in the economy. Home sales are measured by:
•New home sales
•Pending home sales
•Housing starts
•Building permits
Home sales rise and fall based on consumer confidence, mortgage rates and the general strength of the economy. A strong housing sector is therefore positive for the currency.
I/ TRADE BALANCE - Sandeep Kanao
The trade balance report gives details on the amount of imports and exports for a country over a given period. A trade deficit is negative for the value of a currency, while a trade surplus is positive for a currency.
The biggest influencers of market movements are, of course, the announcements and policies of a country's central bank and other monetary authorities about interest rates.
Raising the interest rate curbs inflation, while lowering it promotes economic growth. Higher interest rates attract capital, so they increase demand for a currency and its value rises.
Government spending and taxation are referred to as fiscal policy. Fiscal policy is a prominent way of stimulating the economy and can be a potent tool when dealing with a recession. A loose (expansionary) fiscal policy can cause the value of the currency to rise.
Introduction to Capital Market: Financial Risk Management - Sandeep Kanao
Tuesday, 12 November 2013
Wednesday, 28 August 2013
Sensitivity analysis in risk management - in Capital Market - Sandeep Kanao
What is Monte Carlo Simulation and How Monte Carlo simulation works - Sandeep Kanao
What is Monte Carlo simulation? - Sandeep Kanao
Monte Carlo simulation is a computerized mathematical technique that helps to account for risk in quantitative analysis and decision making. The technique is widely used in finance.
Monte Carlo simulation furnishes the decision-maker with a range of possible outcomes and the probabilities they will occur for any choice of action. It shows the extreme possibilities—the outcomes of going for broke and for the most conservative decision—along with all possible consequences for middle-of-the-road decisions.
How Monte Carlo simulation works - Sandeep Kanao
Monte Carlo simulation performs risk analysis by building models of possible results by substituting a range of values—a probability distribution—for any factor that has inherent uncertainty. It then calculates results over and over, each time using a different set of random values from the probability functions. Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation could involve thousands or tens of thousands of recalculations before it is complete. Monte Carlo simulation produces distributions of possible outcome values.
By using probability distributions, variables can have different probabilities of different outcomes occurring. Probability distributions are a much more realistic way of describing uncertainty in variables of a risk analysis. Common probability distributions include:
Normal – Or “bell curve.” The user simply defines the mean or expected value and a standard deviation to describe the variation about the mean. Values in the middle near the mean are most likely to occur. Examples of variables described by normal distributions include inflation rates and energy prices.
Lognormal – Values are positively skewed, not symmetric like a normal distribution. It is used to represent values that don’t go below zero but have unlimited positive potential. Examples of variables described by lognormal distributions include real estate property values, stock prices, and oil reserves.
Uniform – All values have an equal chance of occurring, and the user simply defines the minimum and maximum. Examples of variables that could be uniformly distributed include manufacturing costs or future sales revenues for a new product.
Triangular – The user defines the minimum, most likely, and maximum values. Values around the most likely are more likely to occur. Variables that could be described by a triangular distribution include past sales history per unit of time and inventory levels.
PERT – The user defines the minimum, most likely, and maximum values, just like the triangular distribution. Values around the most likely are more likely to occur. However, values between the most likely and the extremes are more likely to occur than in the triangular distribution; that is, the extremes are not as emphasized. An example of the use of a PERT distribution is to describe the duration of a task in a project management model.
Discrete – The user defines specific values that may occur and the likelihood of each. An example might be the results of a lawsuit: 20% chance of a positive verdict, 30% chance of a negative verdict, 40% chance of settlement, and 10% chance of mistrial.
During a Monte Carlo simulation, values are sampled at random from the input probability distributions. Each set of samples is called an iteration, and the resulting outcome from that sample is recorded. Monte Carlo simulation does this hundreds or thousands of times, and the result is a probability distribution of possible outcomes. In this way, Monte Carlo simulation provides a much more comprehensive view of what may happen. It tells you not only what could happen, but how likely it is to happen.
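A minimal sketch of this sampling loop, assuming a single normally distributed input (inflation) and a simple, made-up outcome function; none of the figures come from the original post:

using System;
using System.Linq;

// Toy Monte Carlo loop: sample an uncertain input many times and collect the outcomes.
class MonteCarloSketch
{
    static void Main()
    {
        var rng = new Random(42);
        int iterations = 10000;
        double meanInflation = 0.02, stdDevInflation = 0.01;   // assumed input distribution

        var outcomes = new double[iterations];
        for (int i = 0; i < iterations; i++)
        {
            // Box-Muller transform: turn two uniform draws into one standard normal draw.
            double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
            double z = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
            double inflation = meanInflation + stdDevInflation * z;

            // Illustrative outcome: real value of 100 after one year of 5% nominal growth.
            outcomes[i] = 100.0 * 1.05 / (1.0 + inflation);
        }

        // The recorded outcomes approximate the outcome distribution.
        Array.Sort(outcomes);
        Console.WriteLine("Mean outcome: " + outcomes.Average().ToString("F2"));
        Console.WriteLine("5th percentile: " + outcomes[iterations / 20].ToString("F2"));
    }
}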
Monte Carlo simulation provides a number of advantages over deterministic, or “single-point estimate” analysis:
Probabilistic Results. Results show not only what could happen, but how likely each outcome is.
Graphical Results. Because of the data a Monte Carlo simulation generates, it’s easy to create graphs of different outcomes and their chances of occurrence. This is important for communicating findings to other stakeholders.
Sensitivity Analysis. With just a few cases, deterministic analysis makes it difficult to see which variables impact the outcome the most. In Monte Carlo simulation, it's easy to see which inputs had the biggest effect on bottom-line results.
Scenario Analysis. In deterministic models, it's very difficult to model different combinations of values for different inputs to see the effects of truly different scenarios. Using Monte Carlo simulation, analysts can see exactly which inputs had which values together when certain outcomes occurred. This is invaluable for pursuing further analysis.
Correlation of Inputs. In Monte Carlo simulation, it's possible to model interdependent relationships between input variables. It's important for accuracy to represent how, in reality, when some factors go up, others go up or down accordingly.
Tuesday, 30 July 2013
What is dependency injection - Sandeep Kanao
Dependency Injection is a technique that decouples the consumer from the actual implementation during design/compile time and binds them at run time.
It is based on the Hollywood principle: "Don't call us, we'll call you."
Dependency injection is basically providing the objects that an object needs (its dependencies) instead of having it construct them itself. It's a very useful technique for testing, since it allows dependencies to be mocked or stubbed out. Consider a class that builds its own dependency through a factory:
// Hard-wired dependency: SomeClass obtains its own collaborator from a factory.
public SomeClass() {
    myObject = Factory.getObject();
}
This can be troublesome when all you want to do is run some unit tests on SomeClass, especially if myObject is something that does complex disk or network access.
// Injected dependency: the caller supplies the collaborator instead.
public SomeClass(MyClass myObject) {
    this.myObject = myObject;
}
This way, you can create a dummy myObject for unit testing.
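A small illustration of that, assuming a hypothetical stub class (the names follow the snippet above and are otherwise invented):

// A stand-in for the real MyClass that avoids the complex disk or network access.
public class StubMyClass : MyClass {
    // override or fake whatever behaviour SomeClass relies on
}

// In a unit test, the stub is injected instead of the real dependency:
var someClass = new SomeClass(new StubMyClass());
// ...exercise someClass and assert on its behaviour without touching disk or network.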
ASP.NET MVC Pipeline - Sandeep Kanao
Step 1: The request goes through the ASP.NET stack and is first handed over to the routing engine.
Step 2: Based on the route configuration, the routing engine looks for the appropriate controller. If the controller is found, it is invoked. If it is not found, the routing engine returns a controller-not-found error.
Step 3: The controller interacts with the model as required. If there is incoming data, ASP.NET MVC performs model binding to turn the incoming data into a strongly typed model if required.
Step 4: The model, if invoked, retrieves or saves the appropriate data and returns to the controller.
Step 5: The controller then requests a view, with (or without) the data from the model. One or more view engines may be registered, so MVC cycles through the view engines until it finds one that can render the view; that view engine returns the result to the controller, and the controller sends the result back as part of the HTTP response.
The key takeaway is that ASP.NET MVC deals with straight HTTP; there is no ViewState munging or other fancy state management in the pipeline.
A more detailed view is available in Steven Sanderson’s famous chart (from RedGate’s site).
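As a rough illustration of the steps above, a default route registration and a minimal controller might look like the sketch below; the ProductsController name and its action are assumptions for illustration, not part of the original post:

using System.Web.Mvc;
using System.Web.Routing;

// Steps 1-2: the route table the routing engine uses to pick a controller.
public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}

// Steps 3-5: an action receives a model-bound parameter and returns a view result.
public class ProductsController : Controller
{
    public ActionResult Details(int id)
    {
        var product = new { Id = id, Name = "Sample" };  // stand-in for a model lookup
        return View(product);                            // handed to the view engine for rendering
    }
}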
Tuesday, 11 June 2013
Comparison of WCF & ASP.NET Web API- VS 2012 - Sandeep Kanao
WCF
•Back-end services
•SOAP, WS-*
•Transports: HTTP, TCP, UDP, queues, WebSockets, custom
•Message patterns: request-reply, one-way, duplex
•Use WCF Web HTTP to add HTTP endpoints to existing WCF services
•Use WCF Data Services for full OData support
ASP.NET Web API
•Front-end services
•Media types: JSON, XML, form-URL-encoded, custom
•HTTP only
•Request-reply only
•REST, resource-centric
•Use SignalR for asynchronous signaling (polling, long-polling, WebSockets)
A minimal Web API controller sketch is shown below.
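The OrdersController name and its in-memory data in this sketch are illustrative assumptions; the framework negotiates JSON or XML based on the client's Accept header:

using System.Collections.Generic;
using System.Net;
using System.Web.Http;

// Resource-centric controller: responds to GET /api/orders and GET /api/orders/1.
public class OrdersController : ApiController
{
    private static readonly Dictionary<int, string> Orders =
        new Dictionary<int, string> { { 1, "Widget" }, { 2, "Gadget" } };

    // GET /api/orders - returns all orders as a request-reply over HTTP.
    public IEnumerable<string> Get()
    {
        return Orders.Values;
    }

    // GET /api/orders/{id} - returns the order, or 404 if the id is unknown.
    public string Get(int id)
    {
        string value;
        if (!Orders.TryGetValue(id, out value))
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return value;
    }
}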
Friday, 19 April 2013
WCF and Web Services - Sandeep Kanao
Webservice : How to access session variables - Sandeep Kanao
Define an attribute that indicates you require a session:
// EnableSession = true gives this web method access to ASP.NET session state.
[WebMethod(EnableSession = true)]
public void MyWebService()
{
Foo foo;
Session["MyObjectName"] = new Foo();
foo = Session["MyObjectName"] as Foo;
}
WCF : How to access session variables - Sandeep Kanao
Solution 1:
Set aspNetCompatibilityEnabled = "true" inside system.serviceModel | serviceHostingEnvironment, and decorate the service with:
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
On the client, allow cookies so the ASP.NET session cookie is sent back to the service:
<basicHttpBinding>
<binding name="SessionBinding" allowCookies="true" />
</basicHttpBinding>
Solution 2 :
Use OperationContext
WCF : How to access a web service in a WCF service - Sandeep Kanao
As in ASP.NET - by creating a web service proxy class or by adding a service reference.
Major difference between WCF and Web Services - Sandeep Kanao
The major difference is that Web Services use XmlSerializer while WCF uses DataContractSerializer, which performs better than XmlSerializer. Some key issues with using XmlSerializer to serialize .NET types to XML are:
* Only public fields or properties of .NET types can be translated into XML.
* Only classes that implement the IEnumerable interface can be serialized as collections.
* Classes that implement the IDictionary interface, such as Hashtable, cannot be serialized.
Important differences between DataContractSerializer and XmlSerializer:
* A practical benefit of the design of the DataContractSerializer is better performance than XmlSerializer.
* XML serialization does not explicitly indicate which fields or properties of the type are serialized into XML, whereas the DataContractSerializer explicitly shows which fields or properties are serialized.
* The DataContractSerializer can translate a Hashtable into XML.
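A small sketch of the explicit opt-in model (the Trade type and its members are assumptions for illustration):

using System.Runtime.Serialization;

// With DataContractSerializer, serialization is explicit: only members marked
// [DataMember] are written to XML, regardless of their visibility.
[DataContract]
public class Trade
{
    [DataMember]
    public string Ticker { get; set; }        // serialized

    [DataMember]
    private decimal notional;                 // private members can be serialized too

    public string InternalNote { get; set; }  // not marked, so never serialized
}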
Tuesday, 2 April 2013
Market Risk vs Credit Risk - Sandeep Kanao
Market Risk
Potential loss in the market value of our position if a market risk factor moves against our position. VaR (Value at Risk) is the common measure used.
Credit Risk
Credit Risk = Exposure x Credit Worthiness x Severity
Exposure
Potential loss as a result of counterparty default
Credit Worthiness
Probability of default or credit migration
Severity
Fractional loss given default
Severity = 1 – Recovery Rate
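A toy numeric example of the decomposition above; every figure is a made-up assumption:

using System;

// Illustrative expected-loss calculation: Exposure x Credit Worthiness x Severity.
double exposure = 10000000;                 // potential loss if the counterparty defaults
double probabilityOfDefault = 0.02;         // credit worthiness expressed as a default probability
double recoveryRate = 0.40;                 // fraction expected to be recovered after default
double severity = 1.0 - recoveryRate;       // Severity = 1 - Recovery Rate

double expectedLoss = exposure * probabilityOfDefault * severity;
Console.WriteLine("Expected credit loss: " + expectedLoss.ToString("N0"));   // 120,000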
Methods for Measuring Counterparty Exposure - Sandeep Kanao
Portfolio Simulation Method
•Simulate multiple scenarios of future values of risk factors, e.g. FX rates, interest rates, commodity and equity prices
•Value each deal in the portfolio using the simulated market risk factors as input to pricing models
•Aggregate counterparty exposure using the appropriate netting rules, margin and collateral agreements
•Calculate the exposure measures (a simplified sketch follows this list):
•Confidence level exposure (e.g. 95%)
•Expected Exposure and Effective Exposure
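A much-simplified sketch of that loop, computing the expected exposure and a 95% exposure from simulated portfolio values; the single FX-style deal, the lognormal model and every number are illustrative assumptions, not actual pricing logic:

using System;
using System.Linq;

// Toy portfolio-simulation sketch: simulate one risk factor, revalue one deal,
// and compute exposure measures. Netting, margin and collateral are omitted.
class ExposureSketch
{
    static void Main()
    {
        var rng = new Random(7);
        int scenarios = 10000;
        double fxSpot = 1.30, vol = 0.10;    // assumed FX spot and one-year volatility
        double notional = 1000000;           // assumed deal size

        var exposures = new double[scenarios];
        for (int i = 0; i < scenarios; i++)
        {
            // Simulate the risk factor (lognormal FX rate via a Box-Muller normal draw).
            double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
            double z = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
            double fxRate = fxSpot * Math.Exp(-0.5 * vol * vol + vol * z);

            // Revalue the deal; counterparty exposure is only the positive mark-to-market.
            double mtm = notional * (fxRate - fxSpot);
            exposures[i] = Math.Max(mtm, 0.0);
        }

        Array.Sort(exposures);
        double expectedExposure = exposures.Average();
        double exposure95 = exposures[(int)(0.95 * scenarios)];
        Console.WriteLine("Expected exposure: " + expectedExposure.ToString("N0"));
        Console.WriteLine("95% exposure: " + exposure95.ToString("N0"));
    }
}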
Tuesday, 29 January 2013
Architectural design for federated Corporate Credit Commercial Capital Market Banking Applications using Service Oriented Architecture (SOA) and Enterprise Service Bus - Sandeep Kanao
Goal :
One of the major banks in Canada has several existing corporate commercial applications (CCL) in ASP and ASP.NET. Following the bank mergers, a new application was initiated to cater to the needs of small businesses (< 50 M). This application needs information from the existing CCL application suites.
Since all of these corporate commercial (CCL) applications are stable and are hosted and maintained by several different departments, we cannot modify them or make any changes. The goal of the new project is to develop federated security services, so a user can be authenticated on all of the existing CCL web applications using the same token and can consume the required services. This is achievable with a service oriented architecture (SOA) and an enterprise service bus. The application must support dynamic resolution of endpoints (intelligent routing) for any request made by the new application.
Interoperability needs
The 'services' look more like ASP pages that provide access to functionality in an API-like fashion. What is lost is a uniform programmatic way to access these 'APIs', since each page can come with its own specific way of interacting with it. We can build adaptors that hide these particularities: wrap the calls to the ASP pages in web services, so that the call to the ASP page and its parameter passing are hidden behind a cleaner programmatic interface.
Data integrity needs
Here we potentially have a big problem, provided that we are dealing with distributed transactions against multiple resource managers/databases. In very practical terms, we have to ask ourselves how to roll back changes when our APIs come in the form of ASP pages, which obviously cannot enlist in a distributed transaction.
If distributed transactions are not needed, we can create another integration level with the different apps (for example at the middle-tier or database level). Some of these 'services' may not allow their functionality to be reused.
The application layer uses:
Neudesic Neuron, WCF 4, MSMQ and AppFabric to support dynamic resolution of endpoints (intelligent routing).
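As a small illustration of what dynamic endpoint resolution can look like at the WCF level, the sketch below creates a channel against an address chosen at run time; the ICclService contract and the idea of looking the address up from a registry are assumptions, not the actual Neuron/ESB configuration:

using System.ServiceModel;

// Hypothetical service contract exposed by one of the existing CCL applications.
[ServiceContract]
public interface ICclService
{
    [OperationContract]
    string GetFacilitySummary(string customerId);
}

public static class CclServiceClientFactory
{
    // The endpoint address is resolved per request (e.g. from a service registry
    // or the service bus) rather than being fixed in configuration.
    public static ICclService Create(string resolvedAddress)
    {
        var binding = new BasicHttpBinding();
        var endpoint = new EndpointAddress(resolvedAddress);
        return ChannelFactory<ICclService>.CreateChannel(binding, endpoint);
    }
}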
Friday, 11 January 2013
In Memory Cloud Datagrid Technologies - Capital Market VaR generation - Sandeep Kanao
The move to in-memory is all about achieving the best performance, by accessing data held in a server's random access memory (RAM) as opposed to on a hard disk. Typical access speed for RAM is 0.4 nanoseconds, whereas for disk it's 4 milliseconds - so RAM is 10 million times faster.
A number of in-memory cloud data grid offerings are available, including:
- ActiveSpaces from Tibco Software
- Oracle's Coherence
- Armanta's Intelligence Services
- GemFire from VMware
- Quartet FS's ActivePivot in-memory analytics
- SAP's High Performance Analytic Appliance (HANA)
- ScaleOut Software's StateServer
- GigaSpaces Technologies XAP platform
- BigMemory from Software AG's Terracotta unit
- Kognitio's In-Memory Analytics Platform
- GridGain's In-Memory Compute and Data Grid
- Open source Memcached
- NCache (.NET)
- Microsoft Appfabric (.NET)
We wanted to try an in-memory database for the VaR simulation. The VaR engine is hosted on two Solaris (T44) boxes with 128 cores each. The application is written in C/C++ (64-bit) and the database is Sybase. A VaR simulation run takes 8 hours. The goal is to support at least three VaR runs within the stipulated window (8 hours), along with performance improvements in the current logic.
We identified the performance bottlenecks within the application and found that the VaR simulation, as well as the scenario read/write to the Sybase database, takes the largest share of the time (over 30%). As a result, we evaluated the following three in-memory databases:
- ActiveSpaces from Tibco (installed on the same rack as the client) on 2 T44 Solaris boxes with 64GB of memory each
- GemFire from VMware (installed on the same rack as the client) on 2 T44 Solaris boxes with 64GB of memory each
- SAP HANA, installed on the data grid - 3 T44 Solaris boxes with 64GB of memory each
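As a rough illustration of the cache-aside pattern these products support, the sketch below uses a plain in-process dictionary as a stand-in for a real data-grid client; all names and values are assumptions:

using System;
using System.Collections.Concurrent;

// Cache-aside scenario storage: read a scenario vector from memory if present,
// otherwise load it from the database once and keep it in the grid.
class ScenarioCacheSketch
{
    private static readonly ConcurrentDictionary<string, double[]> Grid =
        new ConcurrentDictionary<string, double[]>();

    public static double[] GetScenario(string scenarioId)
    {
        // RAM lookup (nanoseconds) instead of a database round trip (milliseconds).
        return Grid.GetOrAdd(scenarioId, LoadScenarioFromDatabase);
    }

    private static double[] LoadScenarioFromDatabase(string scenarioId)
    {
        // Placeholder for the Sybase read identified as part of the bottleneck.
        Console.WriteLine("Loading scenario " + scenarioId + " from database...");
        return new double[] { 1.01, 0.98, 1.03 };   // dummy scenario vector
    }
}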