
News

Posted almost 13 years ago
Last week I sat down with TabbFORUM to talk about how regulation is changing the way hedge funds and investment banks do business - and how technology can help address some of those changes. Among other things, we discussed the need to optimize your IT budget, the age-old buy vs. build conundrum, and transparency. Watch the whole interview below:
Posted almost 13 years ago
When OpenGamma was started by Kirk Wylie, Elaine McLeod and me, our approach to recruitment was primarily driven by our own recruitment experiences. We were all recruited into the expectations of the Hedge Fund environment: operating in a high-pressure environment with an extremely high expected level of professionalism and technical excellence. Later on, we were also involved in interviewing many other candidates for roles operating alongside us. In this post, I'm going to describe the evolution of the recruitment process at OpenGamma and what it looks like today, specifically focusing on Platform Development roles.

How our recruitment process has evolved over time

The first interviews at OpenGamma were based upon the previous 'Hedge Fund' model. With hindsight, our interviews tended to be rather too close to a process of attrition. In a desire to both put the candidate under pressure to see how they operated in a high-stress situation, and to maximise the use of a candidate's time (taking time off work can be difficult to repeat for a candidate), we tended to do 'marathon' interviews of 5-6 hours, with staff members each taking an hour-long slot. Early on, we also had a policy of every team member interviewing new candidates.

This process was not sustainable. We first introduced phone interviews, so that we didn't waste our own time or that of candidates who we didn't believe were a good fit.

After the team had expanded to a certain point, we could also no longer sustain the policy of every team member interviewing new candidates. There were a couple of reasons for this. Firstly, the sheer number of people involved became unmanageable. We began to double up staff in interviews, typically with one person leading the interview and another more passive, which meant we could get more people into the process. But every time you add a person to the process, it becomes harder to accept a candidate. An important part of our interview methodology is that every interviewer can give a "No" to any candidate; we always ask the interviewer to justify this decision, but at the end of the day the people who are interviewing will have to work with the candidate, and they need to feel comfortable working with him or her for a long time. Even a really excellent candidate has a small probability of being rejected by a seemingly rational interviewer, and when you compound that probability across 6, 8 or even 10 interviewers, the risk of rejecting perfectly reasonable candidates grows quickly (if each interviewer independently says "No" to 5% of excellent candidates, eight interviewers will between them veto roughly a third of them). This did in fact happen on at least one occasion. After rejecting one candidate, they kept coming to mind and no one could give a good reason why they were rejected - so six months later, we offered them the job (they've now been with us for over a year).

Lastly, we stopped the really long interviews in most cases. Part of our reason for trying to perform interviews in one day was that we had ourselves been irritated by the two, three or even more interviews typically required in the financial industry's recruitment process. It can be tough for a candidate to get that much time off work without arousing suspicion. We therefore still give some candidates the option of packing the interviews together, particularly if they have travelled from further afield to attend. We also weren't sure that screening out people who weren't capable of operating in a super high-pressure environment was necessarily always a great idea. OpenGamma is not a hedge fund after all, and we pride ourselves on giving employees a pleasant working environment, with flexible working hours and so on. So while we do need people who can think fast on their feet and work to fix customer problems quickly, we also have room for those who are excellent, but perhaps look to longer-term issues like system design and evolution rather than quick fixes.

...And what it looks like today

So, after that process of evolution, we've fallen into a predictable pattern:

CVs come to us via:
- One of the few recruitment agents on our approved list (which is very short, and we are not looking to add to it at the moment).
- Direct email to jobs at opengamma dot com.
- Events such as Silicon Milkroundabout.
- Personal introductions and recommendations.

The appropriate head of team reviews CVs to produce a shortlist. Often this is sporadic, and based on a specific requirement to hire in the short-to-medium term. In the meantime, we aim to let prospective candidates know the situation, although sometimes the volume of applications (particularly wholly inappropriate "scatter gun" ones) means we can't - apologies to anyone reading this who didn't get a response. At this point we lose around 90-95% of candidates.

The shortlisted candidates are then invited to a telephone interview. This is generally a short (20-60 minute) phone call, just to get an idea of their personality, to relate more information about the role on offer and to judge their expectations. It is often surprisingly light on technical or job-specific questions. At this point we probably lose around 20-50% of the remaining candidates.

We then ask the remaining candidates to complete a code sample. This takes the form of a standard programming task (typically in Java), and we evaluate the candidate on the basis of their interpretation. We see quite a wide range of ability at this stage, but generally the standard is very high. Over the years we've come to take certain characteristics of this test as good pointers to a candidate's way of thinking and level of ability. Even more interestingly, we provide absolutely no guidance on what we're looking for to the candidate or (even more importantly) to the recruiter, if there is one. Even with that lack of guidance, we find that successful candidates all tend to include the same things we'd expect, without us having to tell them. This gives us a feel for how candidates code in a self-directed manner, without the stress of the whiteboard-programming interview.

We then invite the candidate in for a first round of interviews. This typically lasts around two hours, with one or two interviewers in each hour-long slot. The interviews typically focus on domain-specific problem solving ("How would you do X?"). For development roles this is generally algorithmic, although it may involve knowledge about specific hardware or software techniques. We also often ask very difficult questions on the edge of the candidate's area of expertise to see how they react to solving problems they're not necessarily comfortable with. One thing we're always careful to point out is that, given the strength of the technical team here, for anything on your CV there is a good chance an OpenGamma employee is at world-expert level in that thing. We always advise people either to exclude things they're not comfortable with, or at least to indicate their relative skill level (so that we don't assume someone who has done a little bit of SQL can handle a technical interview with Kirk, who started his career doing nothing but database internals development).

After the interviews, the head of department asks each interviewer separately what they thought. Discussion between interviewers is discouraged before this point, as it tends to introduce a sort of 'group-think' confirmation bias, where people who privately would endorse a candidate are influenced by other group members to change their mind.

We then decide whether to bring the candidate in for a second round of interviews. This typically includes another one to two hours of technical interviews and a final interview with someone senior, such as myself. At this point Kirk always interviews every candidate, to make sure people have time to ask questions about the business side of the equation, as well as to make sure that all developers will be a good fit. After all, we're not just choosing candidates, they're choosing us! At this point we decide whether to extend an offer to the candidate or to pass.

In conclusion, I think the changes we have made over the lifetime of the company have improved the process of finding great people to join us at OpenGamma. It now takes us less time to find suitable candidates, puts less time pressure on candidates, and wastes less time of people who aren't a good fit for us. Be under no illusions: we still have very high standards, and we've been told that ours is the hardest interview process in London at the moment. But if you think you can make the grade, and relish the opportunity to work with some of the top technologists in the UK, apply today!

We'll be attending Silicon Milkroundabout in London at the end of the month. It's a great opportunity to meet a number of exciting start-ups, all in recruitment mode. If you are interested in learning more about working at OpenGamma, find us on stand 48.
Posted almost 13 years ago
Because OpenGamma's approach is quite different from that of a typical vendor in the financial technology space, we've sometimes found it challenging to get our philosophy across to the wider audience. If a picture is worth a thousand words, we thought a video must be even better - and decided to put together a quick animation explaining our approach to modern risk and trading analytics. So, without further ado, may I present:

Update: If you cannot view Vimeo videos for some reason, view the animation on YouTube instead.
Posted almost 13 years ago
April has been a hectic month at the OpenGamma HQ thanks to our 1.0 Platform release, but now that it's out, we thought it was about time we organised another OpenGamma OpenHouse. We'll be opening our doors, serving up some food and drinks, and firing up the demo servers to showcase the new OpenGamma Platform on Thursday, May 3rd, 2012.

Now that the 1.0 release is finally out, there's lots of exciting stuff to demo. If you'd like to see our R module, the Excel integration, some of our other non-Open Source GUI tools, or the open source Bloomberg module in action, this is your chance to have a chat with the developers behind those features (and enjoy a few beers while you're at it). Even better, we've got more stuff that isn't in 1.0, so you get a sneak peek at what's likely to be in 1.1 (and beyond!).

Our previous OpenHouse was in late 2010, so there have been a lot of changes in the company since then: we've moved offices and doubled the size of the team here in London. (So please don't show up at the old warehouse above the Kirkaldy Test Museum; we've moved to Park Street next to the Tate Modern.)

So come along to meet the team, see the Platform in action, and ask any questions you may have. You can just come by our offices in Southwark after work on Thursday, 3rd May, but we'd like you to register on Eventbrite (Password: ValueAtRisk) so that we have a rough idea of who's coming, how much food to order, and, perhaps most importantly, how many beers to have ice cold! We hope to see as many of you from the London area as possible there! If you can't make it next week, we'll be organising another OpenHouse some time in June (follow us on Twitter to be among the first to hear about it).
Posted almost 13 years ago
The fourth annual R/Finance conference will take place in the Windy City next month. Aimed at users of R, the open source programming language for statistical computation and graphics, the event focuses on using R as a primary tool for financial risk management, analysis and trading. (To learn more about R and its exponential growth in the past few years, check out this excellent white paper by Revolution Analytics.)

Two members of the OpenGamma team attended last year's conference, and talked about what early adopters of the OpenGamma Platform wanted us to do with our R Integration Module. The response was so positive that we pushed forward on the implementation (now available as part of the 1.0 release), and decided to join as one of the sponsors of the R/Finance conference. We'll be showcasing the R integration on the first day of the conference. Additionally, Ana Nelson, a documentation consultant for OpenGamma, will be speaking on financial reporting and documentation using R and Dexy on the second day.

Working with some of the foremost quants in the industry (like Andrew Rennie, the former global head of analytics at Merrill Lynch), we've learned that some of the advanced portfolio analytics required by modern quantitative trading strategies are ideally suited to the R programming environment. Using the OpenGamma R Integration Module, analysts can create custom stresses and scenarios, using R's rich ability to perform statistical perturbations on market data and security terms, all fully integrated with the rest of the OpenGamma Platform. Even better, these calculations all happen on the same server-side infrastructure used by the rest of your installation, keeping your workstation free for the work that has to be done on it.

We believe in supporting the tools that quantitative finance practitioners want to use. That explains our deep commitment to Excel, now R, and forthcoming integrations such as Python and MATLAB. We don't believe that you should have to use our GUIs to do your job. You'll no doubt find many of us hanging out in the conference lobby and demoing the Platform - come and say hi.
Posted almost 13 years ago
Financial analytics are all about the math, and documenting financial analytics means making the math pretty, which on the internet can be an ugly thing to do. LaTeX has been the standard way to make math pretty in print since the 1980s, but making math pretty online has always been contentious and problematic. Projects like MathML have tried to help, but haven't really taken off, because in order to display:

    ax² + bx + c

you need to type:

    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <mrow>
        <mi>a</mi>
        <mo>&#x2062;<!-- &InvisibleTimes; --></mo>
        <msup>
          <mi>x</mi>
          <mn>2</mn>
        </msup>
        <mo>+</mo>
        <mi>b</mi>
        <mo>&#x2062;<!-- &InvisibleTimes; --></mo>
        <mi>x</mi>
        <mo>+</mo>
        <mi>c</mi>
      </mrow>
    </math>

(Example from the MathML Wikipedia page.) By contrast, the LaTeX equivalent for this expression is:

    $ax^2 + bx + c$

Apart from the verbosity of the MathML markup, until very recently it wasn't safe to assume that most browsers would fully support HTML5, which includes MathML support. So, when OpenGamma began to look for a way to document the mathematics they were implementing, they looked for a LaTeX-based option and found it in LaTeXlet, a custom Taglet for Javadocs. This taglet looks for specially formatted @latex tags in Javadoc comments, renders them as LaTeX, and inserts the rendered math into the generated Javadocs as images. The use of images is problematic for several reasons: they take extra time and processing power during the Javadoc build phase; LaTeX must be installed on each machine that needs to generate Javadocs (not a trivial requirement); the generated images can be on the ugly side; and, more importantly, they are very inaccessible. LaTeXlet further requires that you add an extra slash to each of the LaTeX commands that you call. Here is an example of a Javadoc comment marked up for LaTeXlet:

    /**
     * Calculates the historical covariance of two return series. The covariance is given by:
     * {@latex.ilb %preamble{\\usepackage{amsmath}}
     * \\begin{eqnarray*}
     * \\frac{1}{n(n-1)}\\sum\\limits_{i=1}^n (x_i - \\overline{x})(y_i - \\overline{y})
     * \\end{eqnarray*}}
     * where {@latex.inline $x$} is the first return series, {@latex.inline $y$} is the
     * second return series and {@latex.inline $n$} is the number of data points.
     */
    public class HistoricalCovarianceCalculator extends CovarianceCalculator {}

There are a lot of tags in there. And while LaTeXlet will interpret "\\frac" as "\frac", any other LaTeX compiler will be unable to make sense of this. It's also a huge amount of extra typing to apply LaTeX-style formatting to the individual variables, i.e. {@latex.inline $x$}. And because they are replaced by images, it renders the whole sentence inaccessible.

While LaTeXlet had the advantage of being slightly less cumbersome than MathML, and allowed quants already familiar with LaTeX to use a tool they were proficient in, it was far from ideal - but it was probably the best option available at the time. Now, happily, there is an alternative which isn't a least-bad option, and which makes it virtually painless to put beautiful math on the web. The MathJax JavaScript library allows you to insert plain old LaTeX in your HTML pages, and it will render this LaTeX unobtrusively in a beautiful, accessible and standards-compliant way. Because you are writing standard LaTeX without any extra characters, you can also easily use this LaTeX in other PDF documentation, or paste it into tools like mathb.in to get a quick preview.
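Wiring MathJax into a generated page is light-touch: you include the MathJax script and, because single-dollar inline delimiters are not enabled by default, tell its tex2jax pre-processor to recognise the $...$ and $$...$$ markers used below. The snippet here is a minimal sketch based on the standard MathJax 2.x configuration API; the exact configuration used in our Javadoc build may differ.

    // Minimal MathJax 2.x configuration sketch: enable $...$ for inline math and
    // keep $$...$$ for display math. In a page this lives in a
    // <script type="text/x-mathjax-config"> block placed before MathJax.js is loaded.
    MathJax.Hub.Config({
      tex2jax: {
        inlineMath: [['$', '$'], ['\\(', '\\)']],   // single dollars are off by default
        displayMath: [['$$', '$$'], ['\\[', '\\]']],
        processEscapes: true                        // lets \$ stand for a literal dollar sign
      }
    });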
Cleaning up the documentation was one of the goals for the 1.0 release of the OpenGamma Platform. After we verified that we could apply MathJax to our generated Javadocs, I then had to solve the problem of converting almost 200 existing files with LaTeXlet-style markup into the simpler format we would use going forward. I wanted to find a way to automate the process, not only because of the obvious drudgery it would otherwise involve, but because the probability of making mistakes in manually editing such content was very high. I started working on a sed script which would convert the double slashes to single slashes, get rid of the unnecessary @latex tags and their associated brackets, and replace them with math mode declarations like $$:

    s/\s*%preamble{.*}}//g
    s/{@latex.inline \(\$[^$]*\$\)}/\1/g
    s/{@latex.ilb/$$/
    s/end{eqnarray\*}[\s\*]*}\s*/end{eqnarray*}\n * $$/
    s/end{align\*}[\s\*]*}\s*/end{align*}\n * $$/
    s/\\\(\\\w\)/\1/g

This will give output like:

    /**
     * Calculates the historical covariance of two return series. The covariance is given by:
     * $$
     * \begin{eqnarray*}
     * \frac{1}{n(n-1)}\sum\limits_{i=1}^n (x_i - \overline{x})(y_i - \overline{y})
     * \end{eqnarray*}
     * $$
     * where $x$ is the first return series, $y$ is the
     * second return series and $n$ is the number of data points.
     */
    public class HistoricalCovarianceCalculator extends CovarianceCalculator {}

I developed this script by first committing all my other changes, so I was starting with a clean git repository, and then iterating on the sed script by running:

    cd /mnt/work/OG-Platform/projects/
    git checkout HEAD .
    find . -name "*.java" -exec sed -f /mnt/work/blog/remove-latexlet.sed -i {} \;
    git diff . | less

I would review the diff and then modify the sed script to improve its recognition. I repeated this several times until I was happy enough. This script isn't perfect: it sometimes misses the closing } of the @latex.ilb environments, and it was also a little overzealous in converting all instances of \\ to \ rather than just those in math mode chunks. Here's another Javadoc source example:

    /**
     * A simple chooser option gives the holder the right to choose whether the
     * option is to be a standard call or put (both with the same expiry) after a
     * certain time. The exercise style of the option, once the choice has been
     * made, is European.
     * <p>
     * The payoff of this option is:
     * {@latex.ilb %preamble{\\usepackage{amsmath}}
     * \\begin{align*}
     * \\mathrm{payoff} = \\max(c_{BSM}(S, K, T_2), p_{BSM}(S, K, T_2))
     * \\end{align*}
     * }
     * where {@latex.inline $c_{BSM}$} is the general Black-Scholes Merton call
     * price, {@latex.inline $p_{BSM}$} is the general Black-Scholes Merton put
     * price (see {@link BlackScholesMertonModel}), {@latex.inline $K$} is the
     * strike, {@latex.inline $S$} is the spot and {@latex.inline $T_2$}
     * is the time to expiry of the underlying option.
     */
    public class SimpleChooserOptionDefinition extends OptionDefinition {}

And here it is after running sed; you can see it misses the closing } on the line after \end{align*}:

    /**
     * A simple chooser option gives the holder the right to choose whether the
     * option is to be a standard call or put (both with the same expiry) after a
     * certain time. The exercise style of the option, once the choice has been
     * made, is European.
     * <p>
     * The payoff of this option is:
     * $$
     * \begin{align*}
     * \mathrm{payoff} = \max(c_{BSM}(S, K, T_2), p_{BSM}(S, K, T_2))
     * \end{align*}
     * }
     * where $c_{BSM}$ is the general Black-Scholes Merton call
     * price, $p_{BSM}$ is the general Black-Scholes Merton put
     * price (see {@link BlackScholesMertonModel}), $K$ is the
     * strike, $S$ is the spot and $T_2$
     * is the time to expiry of the underlying option.
     */
    public class SimpleChooserOptionDefinition extends OptionDefinition {}

However, I knew I wanted to manually check each file before committing the changes anyway, so I would be able to fix these issues and tidy up any other little niggles that I spotted. It wasn't worth spending more time on the sed script; instead I needed to find out how I was going to identify all of the changed files and make sure I checked each of them.

I use vim as my text editor. I hoped that a solution might be found in some sort of plugin for vim which would let me systematically work through the list of modified files generated by running "git status". Happily, I found exactly such a tool in fugitive.vim. After I installed it using pathogen.vim (by the same author), and watched the first and second vimcasts about fugitive, I quickly had a very efficient workflow going. I would run :Gstatus to open a new buffer listing all the modified files in the repository - which were the files that had been changed by running the final version of the sed script. I would type return while over the first file in the "Changes not staged for commit:" list, which would open that file for editing. Then I would run :Gdiff on that file, which showed me the current changes using vimdiff. Then I could tweak the Javadoc in the working copy, fixing anything that the sed script missed and making any other small edits. When I was finished I would save my changes, return to the status window and type -, which would add the file I had just been editing to the "Changes to be committed:" list. At this point the file would disappear from the "Changes not staged..." list and the cursor would already be positioned over the next file, so I just had to type return to start editing it, and the process would begin again.

With this approach it was very fast and straightforward for me to work my way through the list of modified files, and I could be confident that I had checked every file before adding it to the list for the next commit. I could also exit vim at any time and easily pick up where I left off later by issuing another :Gstatus command. I did this a few times to make sure that I didn't have any files lurking with unsaved changes, and to generate the Javadocs from outside of vim.

The result is a much cleaner Javadoc build, several dependencies removed from the Ivy repository and Ant configuration files, happier developers who can much more easily update and input mathematical descriptions, and beautifully rendered LaTeX that can be re-used in multiple contexts, both for print and web. Mathematical equations show up in the OpenGamma codebase to describe statistical distributions like the Generalized Pareto Distribution, functions such as those used to add, subtract, multiply and divide curves, calculators like the JensenAlphaCalculator and SharpeRatioCalculator, and of course models such as the BlackScholesMertonModel and the BjerksundStenslandModel.
Posted almost 13 years ago
In our experience as Quantitative Analysts, Risk Managers, Back-Office Officers or Traders, we've all at one time or another tried to look up a small detail about a very familiar instrument without finding it. Does Euribor use the end-of-month rule? What is the standard payment frequency for a three-year AUD swap? What is the last trading date of a mid-curve option on Liffe? Those questions may sound familiar. Often, the only way to find answers is to ask your colleagues, search the internet or call a counterpart. Everybody is supposed to know about these details, but, to the best of my knowledge, and to my longstanding despair, they are not available in one unique, easily accessible place.

After ten years of looking for those details, trying to remember them only to forget them the next week and look for them again, I decided not to trust my memory anymore and to write them down. The writing did not happen overnight, not even in a fortnight. The details were collected over almost one year while we were implementing the different instruments in our libraries and creating test portfolios. Now that I have them in writing, when a colleague or a friend asks me for those details, I just have to find the right section and copy and paste it.

Introducing the Interest Rate Instruments and Market Conventions Guide

At OpenGamma, we have decided to go one step further: shape this information in the form of a booklet and make it available to everyone in the industry. Being an Open Source company, we have a tradition of widely distributing what we do when we believe it could be useful for others. Nowhere in the document do I describe pricing or valuation mechanisms, even for the simplest instruments. The link to valuation is that any valuation technique for any instrument presented should include all the relevant instrument features. Most of the standard books and articles smooth the roughness of real life. Settlement lags are nowhere to be seen; day counts and business day conventions are supposed to appear magically, when they are mentioned at all (the short sketch at the end of this post shows how much difference one such convention can make). We all know that nothing appears magically and that there is no such thing as a free lunch. I do not offer you any of those free lunches, but hopefully the document can help you find the salt and pepper to season your own lunch.

The goal of the document is to present conventions and market standards for the most common interest rate instruments. I have done my best to collect the information and check it. Obviously the document is not perfect, and we plan to add to, complement, or correct it when necessary. Do not hesitate to suggest corrections and additions. Market standards are relative, and they evolve. For the same instrument, two groups of people may use different conventions. This is the case with USD swaps: some use an annual money market basis on the fixed leg, and others a semi-annual bond basis. The standards also evolve over time; this is the case with swaptions, for which the standard changed from an up-front premium to a forward premium in September 2010.

The document is certainly not intended to be read from start to end like fiction. If quantitative finance is to be compared to a novel, this booklet would be the introduction of the main characters. It is a reference document, and I expect users to read at most one chapter at a time, and more often one section or even one single line. This is also the way it was written: adding lines, currencies, and instruments as they were required. You can download the booklet here.
It is published under a Creative Commons license (CC BY 3.0), so you are free to use it in any form and redistribute it. However, we do ask that you indicate that the source is the OpenGamma Interest Rate Instruments and Market Conventions Guide. If you want to pass it to someone, we'd suggest that you point to the original document, so that everyone always has the latest version (and credit is given where due). The current version is named 1.0, just like the version of the OpenGamma Platform released on 2nd April. The conventions described in this booklet are naturally already implemented in the OpenGamma Platform and our Analytics Library. If you haven't already adopted it, we highly recommend downloading the fresh new Platform available since Monday!

The devil is in the details.
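To give a flavour of the kind of detail the guide collects, here is a deliberately simplified comparison of two day-count conventions for the same three-month period, written as a small JavaScript sketch. The dates and conventions are illustrative only, and the "30/360" variant shown ignores exactly the day-31 and end-of-month adjustments that the guide spells out.

    // A toy comparison of two common day-count conventions for the accrual
    // period 31 Jan 2012 to 30 Apr 2012 (illustrative only, not taken from the guide).
    function act360(start, end) {
      var msPerDay = 24 * 60 * 60 * 1000;
      return Math.round((end - start) / msPerDay) / 360; // actual days / 360
    }

    // Naive "30/360", deliberately ignoring the day-31 and end-of-month
    // adjustments that the real conventions specify.
    function naive30360(start, end) {
      return (360 * (end.getUTCFullYear() - start.getUTCFullYear())
            + 30 * (end.getUTCMonth() - start.getUTCMonth())
            + (end.getUTCDate() - start.getUTCDate())) / 360;
    }

    var start = new Date(Date.UTC(2012, 0, 31)); // 31 January 2012
    var end = new Date(Date.UTC(2012, 3, 30));   // 30 April 2012

    console.log(act360(start, end));     // 0.25       (90 actual days / 360)
    console.log(naive30360(start, end)); // 0.24722... (89 / 360): the missing
                                         // adjustment alone shifts the accrual
                                         // fraction, and hence any coupon, by ~1%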
Posted almost 13 years ago
The 1.0 version of the OpenGamma Platform is finally out. It's been in the making for longer than anticipated, but we believe it was essential to wait until we had all the main components ready to ship. In this blog post, I'll briefly go over the most significant changes. If you can't wait to see it for yourself, head over to our developers' site to download the Platform.

Bloomberg Integration

After Bloomberg's recent announcement that it was open sourcing its API, we've been able to include the Bloomberg module in our 1.0 release. This will allow you to automatically load reference data for exchange-traded securities, as well as load and update time series. Our integration module also now open sources our OpenGamma Live Data Adapter for pulling real-time streaming data into your OpenGamma environment from your Terminal, SAPI, or Managed B-PIPE instance. You can now download the Platform, hook it up to your existing Bloomberg infrastructure, and be doing live trading and risk analytics with real-world data in a matter of minutes!

This means that when evaluating, you have two basic options. You can choose the classic "Examples" package, which works with entirely mock data, just as in 0.9. However, if you have access to a Bloomberg Terminal, SAPI, or Managed B-PIPE instance, you can use the "BloombergExamples" package to work with real data and get your own portfolios up and running in the system in a matter of minutes.

New HTML5 Web GUI

Our HTML5 Web GUI continues to improve. While we're busy rewriting our Web Analytics Viewer from scratch (to make it easier to include in your custom portals and applications), there have been a number of improvements to the main Open Source Platform GUI:

- The Web Analytics Viewer now supports Market Data Snapshots as well as different Live Data configurations as data sources, with fallback to Historical Time Series for market data.
- It also supports dynamic re-aggregation of your portfolio on the fly - without reconfiguring your View Definition.
- Many of our Data Master viewers (like Portfolios, Positions, and Securities) are far better integrated, and can pull key historical time series directly into the Securities viewer to make common tasks quicker.
- Finally, our whole GUI has been retrofitted with our new "Push REST" capabilities to notify all clients of data changes whenever they happen anywhere in your environment: reloading to pick up changes other people on a desk have made is a thing of the past.

Database Masters

We continue to improve our Database Masters. Aside from numerous performance improvements throughout (particularly in our Historical Time Series database schema and Master code), we've completely rewritten our Batch Risk database and masters to allow far more types of data to be stored and queried with the most complex of View Definitions. Our data masters have also been enhanced to allow runtime tagging of Securities, Trades, Positions, and Portfolios. This allows you to add any custom attributes required on any of these data types, and use them with other new elements like our expression language for dynamic portfolio filtering and aggregation exactly as you would the "native" data fields.

Asset Classes and Analytics

The OpenGamma Platform continues to extend its support for new asset classes and analytical methods. Perhaps the most important change since 0.9.0 has been the amount of time that our Quantitative Development team has spent on making sure that we have a single source of all major market conventions for the G8 (and nearly the whole of the G20). Not only have we incorporated this into the Platform so that we have accurate cashflow determinations for assets in these currencies, but we have a booklet (forthcoming) that has all this information in one place as a guide you can use for your own development.

Notable new assets include:

- Caps/Floors (including CMS)
- Inflation products
- Additional types of IR Swaps
- OIS
- Equity Variance Swaps
- FX Futures
- Digital FX Options

View the full list of asset classes currently covered (PDF).

We've also extended the types of analytical methods available, in particular enhancing the number of different analytical methods available for IR Swaptions and applying our Local Volatility and SVI models to multiple asset classes.

But no matter how good our analytics library or data management may be, there will always be times when you won't want to use them, but instead have a system that can generate sensitivities you want to aggregate with the rest of your portfolio (common examples are credit derivatives, ABS, and RMBS). To handle that, we now have full support for External Sensitivities. You can now create an External Sensitivities Security, assign the appropriate risk factors to it, and use the following features:

- Yield curve sensitivity mapping
- Separate yield curve/credit/all sensitivities buckets
- DV01, CS01, Historical VaR

R Integration Module

We've found a large number of quants and risk managers using the R statistical programming environment to drive deep and custom statistical analysis of their portfolios. In keeping with our dedication to supporting the tools that end-users actually want to use, we have taken our industry-leading Excel Integration Module, stripped away the parts that aren't Excel specific, created the OpenGamma Language Integration package, and used that to provide the same level of extremely deep, useful integration with the R statistical programming environment.

Everything that you would expect by now from a tight OpenGamma integration is there: you can pull in all types of data available in your OpenGamma Platform instance, and they all appear as native R objects; you can drive shocks, stress tests and historical simulations, including perturbing market data at either the individual ticker or tensor level. You can even create custom trades and portfolios from R to do what-if scenarios. We think this is extremely powerful, and we're thrilled that this is in the Open Source OpenGamma Platform! (Also, we're a sponsor of the R/Finance Conference in Chicago in May; if you're in the area, come on by and say hi!)

04/04/2012 Update: The R Installer is now available on the downloads page.

Other Updates

Perhaps the most immediate change developers will notice is the improved build system – it's actually now a single-step process. We're confident that you'll find deployment a whole lot easier. We've also included better ant target names. For those of you evaluating the platform independently, we've included sample data for new asset classes, and improved data import/export: there is a standard import format for security/portfolio data, as well as a framework for custom importers. Finally, we've completely overhauled the configuration system. You'll find that deployment and maintenance have been made significantly more user-friendly, using the distributed component management system.

As ever, we welcome your feedback; please add your comment below. For any technical questions, we recommend contacting our technical team through our forums. A special thank you to those of you already evaluating the Platform who have posted questions, comments and suggestions on the forums over the past months.

Download OpenGamma Platform 1.0
Request a demo
Browse documentation
Posted almost 13 years ago
Today OpenGamma has released version 1.0 of our flagship technology stack, the OpenGamma Platform. While Jim, our Head of Platform Development, has the details on the technical side, I wanted to talk about where this takes us as a company and ecosystem.

When we set out on the OpenGamma journey, our mission was simple: to make the standard of tools used by the most sophisticated market participants available to everyone. In a world filled with black boxes, we wanted to bring radical transparency. In a market where far too many firms keep rebuilding the same systems, we wanted to allow developers to focus on what's proprietary about their firms. It's taken us 2.5 years, over 25 developer-years, 11,000 test cases and 750,000 lines of code to get here: the world's first production-grade Open Source system for trading and risk analytics and data management. From a modern, modular analytics and pricing library, to a near-real-time streaming calculation engine, to data management, to client tools, we know that the OpenGamma Platform contains everything you need as the basis for custom installations.

What took us so long? We released 0.9 when we thought that the code was feature-rich enough to show what we were capable of, and was an excellent starting point for evaluation efforts. We also said that we needed to wait to officially call something 1.0 until we were confident that the system would support your use 24/7/365. Since then we've had thousands of downloads by people evaluating and using the system, providing us with extensive feedback. We're also in use in anger every day by some of the largest and most sophisticated hedge funds in the world. How do we know we're ready? They told us we're ready. 1.0 is battle-tested and ready for use in production today.

What's next? Unlike other vendors, we're not shy about telling you what we're working on. We've got our roadmap on our website so that you know where we're going and roughly when we'll get there. However, our roadmap can really be summed up in a few key points:

- While we already have extensive asset class coverage (PDF summary / full analytics documentation), we're going to improve it. You've told us that while you can already incorporate 3rd party models easily, you don't want to have to do it (or go to a traditional black-box analytics library vendor).
- While many of you are already comfortable and familiar with building your own end-user tools, you want us to do more out of the box. So we'll be enhancing the end-user tools available to both Open Source users and commercial customers, and continuing to push our already industry-leading integrations with systems like Excel and R.
- While we already have a pretty impressive set of data integrations, we want to make sure you can download the system and have live risk on your portfolios, using your existing data, as fast as possible. Unlike legacy vendors, we don't make money by dragging out installations as long as possible to sell consulting services: we want you up, running, and in production as fast as possible.

But more than just categories of features, our roadmap is simple: do whatever it takes to make our customers successful. Whether you're a developer at a hedge fund, an academic quant at a university, or a risk manager at a bank, we want to hear from you. Download the code, read the docs, or contact us for a conversation and demonstration. If you're tired of building and maintaining way more code than you need to, or tired of the way legacy vendors have been treating you, you've found the right people.
Posted almost 13 years ago
A few years ago, John Resig wrote a blog post about partial application in JavaScript and proposed extending the Function prototype with a new method: partial. This technique is quite powerful, as it allows for much more concise and expressive code. We use partial application extensively in our UI codebase, particularly in our wrapper for OpenGamma's REST API, where it has saved us several hundred lines of code. Fewer lines of code often means fewer bugs, and this particular technique cuts down on repeated code and copy/paste errors.

The technique takes advantage of the functional nature of JavaScript (treating functions as first-class objects) and allows information to be encapsulated inside a function: it returns a function with some of its arguments pre-populated, and it uses JavaScript's prototypal inheritance to extend all instances of functions (a sketch of the original implementation and of a fixed version appears after this post).

The problem with the original implementation is that the args array gets cached. In the example use case from the original blog post, a partially applied delay function works the first time, but if that function is used again the results are unexpected: the first callback has been cached inside the args array, and all subsequent calls to delay are pre-populated with that first function.

Additionally, JavaScript Function instances can have arbitrary properties attached to them, but this particular implementation of Function.prototype.partial will lose the references to any additional properties that may have been added to a Function instance. Even though extending Function instances with custom properties is fairly uncommon, it is nonetheless a feature of the language that partial should retain. If a function is being pre-populated, the resultant function should contain references to all the original function's properties.

So, with those two concerns in mind, we wrote our own implementation of partial application. If we run the same test case against it, we get better results this time.

Extending native objects in JavaScript is a controversial subject. The biggest philosophical difference between libraries like jQuery and Prototype is that jQuery does not extend native objects. We did not choose to extend the Function prototype lightly; there are a very small number of cases where we have elected to extend native objects. But in this particular case, the gains in expressiveness and the ability to write less (and more concise) code are compelling.
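For reference, this is roughly what the pieces discussed above look like: John Resig's original partial from his blog post, the delay example that exposes the argument-caching problem, and a minimal sketch of a fixed version that copies the argument list on every call and carries custom properties across. The final version is an illustration of the approach, not necessarily the exact code in our UI codebase.

    // John Resig's original Function.prototype.partial (from his blog post):
    // undefined argument slots are filled in when the returned function is called.
    Function.prototype.partial = function () {
      var fn = this, args = Array.prototype.slice.call(arguments);
      return function () {
        var arg = 0;
        for (var i = 0; i < args.length && arg < arguments.length; i++) {
          if (args[i] === undefined) args[i] = arguments[arg++];
        }
        return fn.apply(this, args);
      };
    };

    // The example use case: delay a callback by 10ms.
    var delay = setTimeout.partial(undefined, 10);
    delay(function () { console.log('first'); });  // logs "first"
    delay(function () { console.log('second'); }); // unexpectedly logs "first" again:
                                                   // the undefined slot in `args` was
                                                   // overwritten on the first call

    // A sketch of a corrected version: work on a copy of `args` for each call so
    // the template is never mutated, and copy any custom properties from the
    // original function onto the partially applied one.
    Function.prototype.partial = function () {
      var fn = this, args = Array.prototype.slice.call(arguments);
      var partial = function () {
        var filled = args.slice(), arg = 0;
        for (var i = 0; i < filled.length && arg < arguments.length; i++) {
          if (filled[i] === undefined) filled[i] = arguments[arg++];
        }
        return fn.apply(this, filled);
      };
      for (var prop in fn) {
        if (fn.hasOwnProperty(prop)) partial[prop] = fn[prop]; // retain custom properties
      }
      return partial;
    };

    // Running the same test case now behaves as expected.
    var delay2 = setTimeout.partial(undefined, 10);
    delay2(function () { console.log('first'); });  // logs "first"
    delay2(function () { console.log('second'); }); // logs "second"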