
News

Posted almost 14 years ago
I'm pleased to announce that after incubating the OpenGamma Platform internally for the last 18 months, and trialing it with a number of financial institutions, we've made our first Open Source release available to the general public. This is a huge milestone for us as a company, and I'm glad we're able to share it with you.

The logical disconnect of a company claiming to be "an Open Source company", without having actually released the source code, wasn't lost on us. In fact, it became such a topic of conversation that I had to address it openly in our blog. At the same time, we needed to make sure that our first release had the breadth and depth of functionality for you to trust us to execute on our vision going forward. While there are limitations in functionality and quite a few rough edges in the 0.7.0 release, we think it's a compelling release that shows what we're capable of both now and in the future.

This is also only the first step of a very long journey. The 0.7.0 release is only a developer preview. We have a lot more functionality to come, from enhancements to our user interface components (to put the best UI in the industry on top of the best platform in the industry), to support for additional asset classes (delivering on our promise of a truly asset-class-neutral system), to scaling up to the largest workloads.

More than anything else, putting 0.7.0 out in public now is a chance to get you more involved in this process. Simply put, we want you, the OpenGamma developer and user community, to tell us what you want us to focus on. Our whole issue tracking system is online and available, so feel free to go in and vote for the features you most want to see addressed in the platform.

So what are you waiting for? Head on over to our Developers page, read the release announcement, and have at it!
Posted almost 14 years ago
Today we've started releasing some more of the components (like the FudgeMsg project, which was our first release) that we've used to build up the OpenGamma Platform before its full, Open Source, release. The next of these is RouteMap.js, a URL mapping library for both client-side and server-side JavaScript programming. In the words of Afshin Darian, one of our employees and one of the initial authors of RouteMap:

We needed a way to map URL fragments (window.location.hash values) to JavaScript functions without buying into a complete MVC framework. Our application uses our own light-weight module framework to load the functionality of different parts of the UI. The UI itself is fairly stateless; almost all state information (except authentication) is encapsulated within the URL. Because of this, we wanted the URL patterns to be fairly sophisticated, with both named and unnamed parameter swapping and wildcard values. We realized that we wouldn't have access to the HTML5 History API in all the browsers we support, so we wanted our library to be agnostic about where these URLs actually came from. For the time being, we bind onhashchange events to the router, but at some later date we can bind HTML5 history methods without touching the underlying code. Since our application is a single-page app and we don't need it to be indexed by search engines, we do not use the hashbang convention, but we recognized that others would, so our library allows for an arbitrary prefix to be configured into the routing rules. Since the functionality of the library is limited in its scope, it's not tied to being used exclusively within the browser. It can also be used in a server-side context like node.js as a very bare-bones web app framework.

The OpenGamma Platform requires that a significant number of technologies be developed, from our Analytics library (with mathematical primitives and pricing libraries for a number of asset classes) through to our Excel Integration Module (which can actually be used to expose any arbitrary Java code to Excel worksheets, not just OpenGamma-developed technology), not to mention our full calculation engine in the middle. I'm glad we can start releasing components at the lowest level reasonable and as early as possible.

If you're working with modern JavaScript-heavy web applications, fork it on GitHub today!
Posted almost 14 years ago
I've known about TestNG personally for some time, and use it in JSR-310/ThreeTen. However, recently we at OpenGamma decided to move the platform from JUnit 4 to TestNG 6.0.1, so it seemed like a good opportunity to document the experience.

The OpenGamma application stack is a broad platform for the financial risk analytics industry. In structural terms, OpenGamma contains many of the elements of a typical enterprise system - utility code, persistence in multiple databases, integration with message oriented middleware, a binary protocol (Fudge), authorization, a compute grid and business logic. Testing these different components is obviously a complex affair, with a combination of unit and integration tests being needed. For example, I've helped develop a number of REST-style data access objects, known as the "masters", which are coded in such a way that they can be used against multiple different databases. Our default database choice is Postgres; however, we have also used HSQL, Apache Derby and Vertica, with a simple approach for adding code to support others. Ensuring that the code is tested against each database requires the same, or very similar, tests to be run against each.

The testing issues are compounded as we are coming up to the open source release of OpenGamma, where anyone will be able to take a look and see what we have been up to. There won't be an instance of Postgres or Vertica available for downloaders to connect to, thus some tests would fail (HSQL is a local database, so will work fine). What this means is that we need greater configurability in the tests, allowing OpenGamma employees to run against Postgres and HSQL, while allowing open source downloaders to run only HSQL.

The choice to use JUnit 4 as the test framework occurred before I joined. In many ways, JUnit is still the "standard" choice, which is often taken without considering the alternatives. However, OpenGamma has clearly reached the point where TestNG makes sense. The main reason to use TestNG is the greater focus on broader testing, rather than just "unit" testing. This support is shown in parameterization (running tests based on external data), groups of tests (allowing a subset of all the tests to be run) and parallelization (to test concurrency and to reduce the length of testing). TestNG also uses the modern Apache License v2 rather than the outdated and slightly controversial CPL.

Conversion

To switch OpenGamma to TestNG involved converting all the existing test classes and integrating with the broader toolset. For the basic features, the two frameworks, JUnit 4 and TestNG, are relatively similar (JUnit 4 adopted the annotation approach from TestNG). However, there are subtleties in the conversion.
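As an illustration of the kind of mechanical change involved, here is a small hypothetical test class written in the TestNG style, with comments noting the JUnit 4 annotation each line replaces (the class and its contents are invented for this example, not taken from the OpenGamma codebase):

import static org.testng.AssertJUnit.assertEquals;

import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ConversionExampleTest {

  private StringBuilder buf;

  @BeforeMethod                          // was @Before under JUnit 4
  public void setUp() {
    buf = new StringBuilder();
  }

  @Test                                  // same annotation name, different import
  public void appendsText() {
    buf.append("abc");
    assertEquals("abc", buf.toString()); // AssertJUnit keeps JUnit's (expected, actual) order
  }

  @Test(expectedExceptions = IndexOutOfBoundsException.class,  // was expected=...
        timeOut = 20000)                                       // was timeout=...
  public void failsOnBadIndex() {
    buf.charAt(5);
  }

  @Test(enabled = false)                 // was @Ignore under JUnit 4
  public void notReadyYet() {
  }
}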
This table includes the main annotations that a typical conversion will see.

Use case | JUnit 4 | TestNG | Comment
Method is a test | @Test | @Test | Simply change the import
Class is a test | N/A | @Test | We decided to stick with per-method annotations in the end
Test is disabled | @Ignore | @Test(enabled=false) | @Ignore on the class disables all tests in the class, whereas @Test(enabled=false) on the class doesn't override @Test on a method
An exception is expected | @Test(expected=IOException.class) | @Test(expectedExceptions=IOException.class) | Simple spelling change
Test timeout | @Test(timeout=20000) | @Test(timeOut=20000) | Simple spelling change
Run before/after each test method | @Before/@After | @BeforeMethod/@AfterMethod | Clearer annotation name
Run before/after each test class | @BeforeClass/@AfterClass | @BeforeClass/@AfterClass | JUnit requires a static method, TestNG allows either a static or an instance method
Run before/after each entire test suite | N/A | @BeforeSuite/@AfterSuite |

With @Test, I decided after experimentation to stick with using it at the method level. This is because an @Test(enabled = false) at the class level does not override an @Test at the method level, which makes it quite hard to disable a whole class. The latest Eclipse plugin features a quick fix (Ctrl-F1) that can pull the @Test up to the class level or push it down to each method, which is useful if you want to try both approaches.

A key feature of TestNG for the conversion task is an Eclipse-based automatic conversion tool. You simply right-click on part of your codebase and let the tool do the work. During the process of converting OpenGamma, I worked with Cedric Beust to refine the tool. With the recent enhancements it successfully converted the vast majority of the OpenGamma codebase without issues. Obviously no automatic conversion tool is perfect, but this one is now good enough to be genuinely useful.

Parameterization

A key goal of the conversion was better support for parameterization, such as running different kinds of database tests. TestNG supports two types of parameterization - fixed values from the setup and dynamically created ones from code.

File-based parameters can be supplied by name using @Parameters({"fooFile","barFile"}). The values are read from the file that controls the tests, typically testng.xml, but this may also be a YAML format file now.

Code-based parameters are supplied by data providers. A method, annotated with @DataProvider(name="foo"), is written that returns an Object[][]. This is then used by one or more test methods via @Test(dataProvider="foo"). The data provider may be shared between multiple classes by making it a static method and using an additional attribute: @Test(dataProvider="foo", dataProviderClass=org.baz.Bar.class).

During the OpenGamma conversion, and based on my suggestion, Cedric enhanced the parameterization support. The previous @Factory annotation was a little clumsy, requiring extra classes and methods. However, @Factory can now be used on a constructor, where it takes a data provider.
Bringing these together, here is an example:

public class CachingTest {

  // the data provider, returning an array of object arrays,
  // where the inner array is the arguments to a method call
  @DataProvider(name = "cacheHints")
  public static Object[][] data_cacheHints() {
    return new Object[][] {
      {CacheSelectHint.allPrivate()},
      {CacheSelectHint.allShared()},
      {CacheSelectHint.mixed()},
    };
  }

  // setup by @Factory, new instance created for each factory value
  private final CacheSelectHint _filter;

  // setup by @BeforeMethod before each test
  private WriteBehindViewComputationCache _cache;

  // a new instance is created for each value returned by the data provider
  @Factory(dataProvider = "cacheHints")
  public CachingTest(final CacheSelectHint filter) {
    _filter = filter;
  }

  // this method is called before each test method
  @BeforeMethod
  public void init() {
    _cache = new WriteBehindViewComputationCache(_filter);
  }

  @Test
  public void getNew() { ... }

  @Test
  public void getExisting() { ... }
}

Thus, three instances of the CachingTest class are created, each with a different value of CacheSelectHint. The two test methods, each preceded by a call to init(), are called three times each, once for each instance of the class. That gives six invocations in total. This approach is useful when sharing a single, more heavyweight object between multiple tests.

Assertions

A key area of difference between JUnit and TestNG is the assertions. JUnit specifies the expected value followed by the actual value, whereas this is reversed in TestNG. However, TestNG supplies a class called AssertJUnit where the methods match the order of JUnit. This class is used in the auto-conversion, and OpenGamma intends to continue writing assert statements this way around. Personally, I find the different ordering of the frameworks confusing. As I switch between projects I frequently get it wrong! Thus for OpenGamma it definitely made sense to keep the JUnit style, so other team members didn't have to change their test writing habits.

Tool integration

A further key feature of TestNG is the ability to integrate with other tools. TestNG itself supplies a full-featured Eclipse plugin, with test runner, quick fixes and the auto-conversion described above. TestNG also includes integration with Ant. The integration also includes the ability to generate an XML file in the same format as the one JUnit would produce. This can then be used as input into a variety of other tools, including the JUnit report runner (TestNG has its own HTML report as well). At OpenGamma, we successfully integrated TestNG with Ant, Bamboo and Clover.

Advanced features

Now that the conversion is complete, the whole OpenGamma team can start using the more advanced features and reap the benefits of the conversion (a short sketch of these features appears at the end of this post).

Tests can have dependencies. It is possible to set up the tests so that a failure in one test causes other tests to be skipped. This avoids having the tests take a long time to run when a key element has been broken.

Tests can be grouped. The groups attribute allows a test to be allocated to a group, such as "fast" or "database". The configuration can choose which groups to run, allowing a subset of the full suite to be easily accessed. Dependencies between groups can be set up, allowing a quick "smoke test" to be created that is run before the longer, more complex tests.

Tests can be run multiple times. The same test can be run multiple times, potentially in parallel. This can be linked to a success percentage, which could be used to handle a test occasionally failing due to a network issue.

Summary

The conversion from JUnit to TestNG was successful and relatively painless. This was greatly helped by the rapid response of Cedric to questions and issues that were raised. That assistance was vital to the conversion - thanks again Cedric! Hopefully all users of TestNG will benefit from the bug fixes and enhancements that were recently added. I certainly think the @Factory changes are very useful!

I'd recommend that other projects needing large-scale testing beyond simple unit tests consider whether switching to TestNG might be beneficial. And if you're starting a new project, perhaps you should consider using TestNG from the start!
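As promised above, here is a minimal, hypothetical sketch of the grouping and repeated-execution features (the class, group names and counts are invented for illustration; the annotation attributes are standard TestNG ones):

import org.testng.annotations.Test;

public class AdvancedFeaturesTest {

  // allocate the test to a group so that a subset (e.g. just "fast") can be run on its own
  @Test(groups = "fast")
  public void quickSmokeCheck() {
  }

  // skipped automatically if anything in the "fast" group fails
  @Test(groups = "database", dependsOnGroups = "fast")
  public void databaseRoundTrip() {
  }

  // run the same test 100 times across 10 threads, tolerating the occasional failure
  @Test(invocationCount = 100, threadPoolSize = 10, successPercentage = 95)
  public void occasionallyFlakyNetworkCall() {
  }
}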
Posted about 14 years ago
The Current State of Dates and Times in Java

I first came across JSR-310 when I was looking for alternatives to the built-in java.util.Date/Calendar classes. At the time I was fresh from the pain of having to implement and evolve a market risk and analytics system that used those existing classes, and was well aware of the problems. While plenty of other people have documented the problems with the existing framework, we should probably start by just reexamining the problems with the existing java.util.Date.

The first problem is that Date isn't actually a Date! It's a Date & Time stored as the number of milliseconds since the 'epoch' (1st Jan 1970 UTC). The original Java 1.0 version of Date also allowed parsing of date strings and pulling out individual fields (e.g. hour, minute, etc). These are now deprecated because they were hopeless for internationalization - there are still a number of different calendrical models in use around the world, different definitions of which days are weekdays, different daylight savings systems and so on.

To try to fix all this, Sun introduced the Calendar class. The idea with Calendar was that you could set up a Calendar with all the correct information for manipulating the Date objects you were dealing with – the TimeZone, the calendar system, and so on (in fact different calendar systems are different subclasses returned by a static factory, e.g. GregorianCalendar, JapaneseImperialCalendar, BuddhistCalendar). You'd then use methods on the Calendar to modify Date objects. This also caused a range of problems. From a data model point of view, there was still no concept of a time zone being linked directly to a Date – there's simply a Calendar-wide setting available, the default of which is based on the Locale. Furthermore, operations now take place within a mixed static/non-static state, which leads to the most serious flaw – it's not thread safe. Another thread can hijack your Calendar and reprogram it mid-calculation.

Lastly there's the java.sql.Date train-wreck. This class is designed to support the database notion of a date and was bizarrely implemented as a subclass of java.util.Date. However, instead of extending the methods of Date, it actually restricts the use of the time fields and throws an IllegalArgumentException if you call them. Note the irony of a method without arguments that throws an IllegalArgumentException. Thus, while it does at least represent dates rather than date/times, it carries all the problems of Date and additionally fails to address the multiple date representations in the SQL standard. In addition, because internally the date is actually stored as a millisecond offset from the epoch at midnight, if you pass one into a Calendar via setTime(), it will not necessarily behave as you expect. When adjustments are made that cross a daylight savings boundary, that value of midnight may roll back into the previous day, which is almost certainly not what you want!
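To make the java.sql.Date and Calendar complaints concrete, here is a tiny, self-contained sketch using only standard JDK classes (the class name is invented for the example):

import java.util.Calendar;
import java.util.Date;

public class LegacyDatePitfalls {
  public static void main(String[] args) {
    // java.sql.Date extends java.util.Date but forbids the inherited time accessors
    java.sql.Date sqlDate = java.sql.Date.valueOf("2010-12-11");
    try {
      sqlDate.getHours();   // a no-argument method...
    } catch (IllegalArgumentException e) {
      System.out.println("...that throws IllegalArgumentException");
    }

    // any manipulation has to go through a mutable, non-thread-safe Calendar,
    // whose time zone and calendar system default from the Locale
    Calendar cal = Calendar.getInstance();
    cal.setTime(sqlDate);
    cal.add(Calendar.MONTH, 1);
    Date oneMonthLater = cal.getTime();
    System.out.println(oneMonthLater);
  }
}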
So, with the current model as it is in the JDK:

- There's no way to represent a date on its own. Should you store it as midnight on the date you want? What happens if daylight savings moves the time back before midnight and wraps to the previous day? That probably isn't what you want!
- There's no way to represent a time on its own.
- There's no way to specify whether a date/time should respect changes in daylight savings in that time zone. If someone sets an alarm, do they always want it to go off at 6am local time, no matter what the time zone, or did they mean 6am UTC? This is the sort of thing that possibly caused the recent iPhone DST alarm bug.
- Manipulating Dates is very cumbersome and very difficult to do in a thread-safe way.
- There is no easy way to change the clock for unit testing without affecting the whole local machine's environment (changing the hardware clock is almost always a really bad idea for all sorts of reasons).

The solution: JSR-310

One of the original suggestions for how to improve the JDK's implementation was to adopt an alternative Date/Time library that was generally considered a mature and stable alternative, and at the very least a significant improvement over the status quo: JodaTime. However, when this idea was put to the author, Stephen Colebourne, he rejected it, citing several structural issues with the design of the JodaTime API that would make it difficult to lift to the status of an official Java API. Instead he suggested designing a new API from scratch, taking into account the lessons of JodaTime and carefully integrating into the JDK in a way that would deal with the legacy classes.

I started to use JSR-310 towards the end of 2008. At the time it was fairly complete from a functionality perspective, but the API hadn't really matured through use and it was only available via subversion, which I feel limits access. My initial reaction was very positive, but I did have trouble deciding which classes and interfaces I should use, largely because at the time there was no tutorial beyond what was contained in the various presentation slide decks that had been given at JavaOne. Over the next year and a half, as I became more comfortable with the API, I started to try it out in OpenGamma's new codebase.

Then suddenly, in one of life's stranger twists of fate, Stephen Colebourne's CV came across our desks at OpenGamma, and we quickly snapped him up to join our team, with the understanding that Stephen would continue to spend time on JSR-310 and push it forward towards eventual inclusion in a future JDK. Since he joined, we've wholeheartedly adopted JSR-310 and honestly haven't looked back. Early on we discovered a few pretty minor issues — for example, there were problems with the speed of loading the TimeZone dataset, which was quickly remedied by an aggressive rewrite and optimization of that code by Stephen. The only other issue was a problem with getting the default TimeZone on Linux, which again was quickly fixed. Since then, the API has stabilized (although there is still the odd change occasionally), and I'd have no hesitation in recommending it for general production use.

Below is a general introduction to the library. There's a lot of richness in the API that doesn't come across in the examples, so dig in and start using it!

General style

It's useful before diving in just to comment on the general style of the library – it is very much 'modern Java'. There is heavy use of static factory methods as an alternative to constructors — in particular, look out for the use of the of() method, which is a shorthand for the commonly used valueOf() method name, popularized by its use in EnumSet. There are also plenty of with(…) methods for augmenting a type with extra information to form another type, and toX(…) methods for straight-up conversions. We've found that generally these conventions work very well and lead to very readable code.
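A small sketch of that style is below. It is written against the final java.time API that JSR-310 eventually became, which kept these conventions but renamed a few types (for example Month rather than the MonthOfYear used in the snapshots this post describes):

import java.time.LocalDate;
import java.time.Month;

public class FluentStyleExample {
  public static void main(String[] args) {
    LocalDate release = LocalDate.of(2010, Month.DECEMBER, 11);  // of() as the static factory
    LocalDate moved = release.withDayOfMonth(25);                // with...() returns an adjusted copy
    long epochDay = moved.toEpochDay();                          // to...() for straight-up conversions
    System.out.println(release + " -> " + moved + " (" + epochDay + ")");
  }
}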
Human Time

If you're ever dealing with how humans deal with time, you will need to use the human date/time classes in JSR-310. Rather than have a single concept, JSR-310 has many different representations of dates and times, depending on how much information is available. At first it seems a little overwhelming to see so many options, but trust me, they all have a use, and there are some rules if you don't know which to choose. The table below should give you a much better idea of which class to choose when using JSR-310.

Class | Use case
LocalTime | You just want to store a time of day without reference to a particular time zone: e.g. you always want the alarm to go off at 11am. This class corresponds to an SQL TIME and an XML Schema xs:time if no 'timezone' (actually an offset) is specified.
LocalDate | This refers just to a particular day in a calendar. There is no time information implied whatsoever, e.g. a birthday. This class corresponds to an SQL DATE and an XML Schema xs:date if no 'timezone' (actually an offset) is specified.
LocalDateTime | A combination of the above two — it contains no information about the time zone, and it's assumed the time is relative to the local time zone. This class corresponds to an SQL TIMESTAMP and an XML Schema xs:dateTime if no 'timezone' (actually an offset) is specified.
OffsetTime | When you want to store a time with a hardcoded offset from UTC. This is useful when you're storing points in time, but aren't intending to do any date-related calculations with them. This class corresponds to the SQL TIME WITH TIMEZONE concept and an XML Schema xs:time with a 'timezone' (actually an offset).
OffsetDate | A date containing a hardcoded offset from UTC. This allows it to be combined with a LocalTime to form an OffsetDateTime. This class corresponds to an XML Schema xs:date with a 'timezone' (actually an offset).
OffsetDateTime | A date and time with a hardcoded offset from UTC. This allows us to uniquely identify a moment in time in any given locale. Date adjustments (e.g. adding three months) will not correctly handle the time zone, though. This class corresponds to the SQL TIMESTAMP WITH TIME ZONE and to an XML Schema xs:dateTime with a 'timezone' (actually an offset).
ZonedDateTime | A date and time with a time zone. This allows us to uniquely identify a moment in time in any given country and allows us to perform accurate date adjustment calculations.

Given these basic classes, there's a well-defined relationship between them, and knowing what you need is essential to get the best out of the library:

LocalTime + LocalDate = LocalDateTime
LocalTime + LocalDate + ZoneOffset = OffsetDateTime
LocalTime + LocalDate + TimeZone + [ZoneResolver] = ZonedDateTime
OffsetTime + LocalDate = OffsetDateTime
OffsetTime + LocalDate + TimeZone = ZonedDateTime
LocalTime + OffsetDate = OffsetDateTime
LocalTime + OffsetDate + TimeZone = ZonedDateTime
LocalDateTime + ZoneOffset = OffsetDateTime
LocalDateTime + TimeZone = ZonedDateTime
OffsetDateTime + TimeZone = ZonedDateTime

Note that the optional ZoneResolver is used to resolve the case where the LocalTime falls on a TimeZone DST boundary. By default, if this is not supplied, an exception may be thrown in this case (although this may change). There are alternative, more lenient ZoneResolvers that can be supplied for alternative behaviours.
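To make the combinations concrete, here is a short sketch. It uses the final java.time names (ZoneId plays the role of the TimeZone class referred to above, and there is no OffsetDate in the released API), so treat it as an illustration of the idea rather than the exact 2010-era JSR-310 code:

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class CompositionExample {
  public static void main(String[] args) {
    LocalDate date = LocalDate.of(2010, 12, 11);
    LocalTime time = LocalTime.of(13, 21);

    LocalDateTime ldt = date.atTime(time);                       // LocalDate + LocalTime = LocalDateTime
    OffsetDateTime odt = ldt.atOffset(ZoneOffset.ofHours(1));    // ... + ZoneOffset = OffsetDateTime
    ZonedDateTime zdt = ldt.atZone(ZoneId.of("Europe/London"));  // ... + time zone = ZonedDateTime

    System.out.println(ldt + " / " + odt + " / " + zdt);
  }
}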
Out of all these classes, I've found the following rules of thumb help if you're not sure:

- If you just want to store a time of day that's locally relative, e.g. the time your local coffee shop opens, use LocalTime.
- If you just want to store something to date resolution, use a LocalDate.
- If you want to store a date/time and don't understand all the options, just use ZonedDateTime; it gives you the most flexibility.

Periods

In addition to absolute human time, JSR-310 provides for the concept of relative time, modeled using the Period class. This represents a number of seconds, minutes, hours, days, months or years (for example). The important thing to realize here is that a concept like '1 month' will have a different length in days depending on what it is relative to, and there is a Strategy pattern in the implementing classes for choosing the correct course of action. Adding one month to 20th February will make it 20th March (a difference of either 28 or 29 days depending on whether it's a leap year), whereas adding one month to 20th March will leap forward 31 days to 20th April. It gets more interesting when you consider, for example, 30th January plus one month. There is no 30th February, so what happens? Actually we end up at the end of the month, so 30th January plus one month equals 28th February (or 29th in a leap year). This is usually the behavior that you actually want, and if you want a month to mean a set number of days, you can always add a fixed number of days instead. Periods can be most easily created with the ofDays(), ofMonths() and ofYears() static factory methods.

Machine Time

In addition to the 'human time' classes that deal with all the complexities of human calendars, there is a machine-based time system, based on the Instant class. This is really just a thin wrapper around a 'nanoseconds of time since the epoch' concept. One thing that can be confusing is that not all the *DateTime classes allow you to convert them to Instants (they don't all implement the InstantProvider interface). This is because not all the human date/time classes contain enough information to actually specify a unique moment in time measured from UTC. For example, if I say

LocalDateTime.of(2010, MonthOfYear.DECEMBER, 11, 13, 21)

there is no information about where in the world we are talking about. 13:21 on 11-Dec-2010 in New Zealand happens about half a day before it happens in the UK. But if I say

OffsetDateTime.of(2010, MonthOfYear.DECEMBER, 11, 13, 21, ZoneOffset.of("+0100"))

then I provide enough information to know the relation to UTC (+1 hour in this case), and I now have a .toInstant() method available. Additionally, you need to be able to provide extra information, such as a TimeZone object, to convert from an Instant into, for example, a ZonedDateTime.

The other main 'machine time' class is Duration, which is simply a period of time between two Instants. You can use these in arithmetic as you would expect:

Instant + Duration = Instant
Duration + Duration = Duration
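Here is a small sketch of the period and machine-time arithmetic just described, again written against the final java.time names rather than the 2010-era snapshot:

import java.time.Duration;
import java.time.Instant;
import java.time.LocalDate;
import java.time.Period;

public class PeriodAndDurationExample {
  public static void main(String[] args) {
    // human-time arithmetic: adding one month to 30th January lands on the end of February
    LocalDate endOfJan = LocalDate.of(2011, 1, 30);
    System.out.println(endOfJan.plus(Period.ofMonths(1)));   // 2011-02-28

    // machine-time arithmetic: Instant + Duration = Instant
    Instant epoch = Instant.ofEpochSecond(0);
    System.out.println(epoch.plus(Duration.ofMinutes(10)));  // 1970-01-01T00:10:00Z

    // Duration + Duration = Duration
    System.out.println(Duration.ofMinutes(10).plus(Duration.ofSeconds(30)));  // PT10M30S
  }
}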
Clocks

A very well thought out aspect of JSR-310 is the way that Clocks are handled. In most Date/Time APIs, Clocks are treated as a global, immutable resource that can be sampled. By providing the abstract class Clock, we can now have clocks that act as normal sources of time, or ones that provide specific behavior, which is very useful in unit testing, for example. The technique of providing an abstraction for the clock and passing it into the class that uses it is known as clock injection. Apart from these special cases, though, how do you use Clock? The easiest way is to use the static factory methods on Clock to obtain an instance. Usually you'll want one using the local time zone, in which case you should use:

Clock.systemDefaultZone()

Hence, one way to create a ZonedDateTime for 'now' in the current time zone is:

ZonedDateTime.now(Clock.systemDefaultZone())

although there is also a shorthand version:

ZonedDateTime.now()

Similarly, if you need the local TimeZone object, the easiest way is to get that from an instance of Clock too:

Clock.systemDefaultZone().getZone()

although you may want to cache the Clock object returned from systemDefaultZone(), depending on the situation.

Parsing dates

Parsing dates is only tricky in that it's not necessarily that easy to find the appropriate classes and methods. Once you know how to do it, though, it's very easy. Our first, rather painful, approach was to use Rules, which define how fields should appear in the various calendar systems:

DateTimeFormatter formatter = new DateTimeFormatterBuilder()
    .appendValue(ISOChronology.yearRule(), 4, 10, SignStyle.EXCEEDS_PAD)
    .appendValue(ISOChronology.monthOfYearRule(), 2)
    .appendValue(ISOChronology.dayOfMonthRule(), 2)
    .toFormatter();

but we later discovered a much simpler approach, which is to use patterns:

DateTimeFormatter formatter = new DateTimeFormatterBuilder()
    .appendPattern("yyyyMMdd").toFormatter();

and then finally the simplest:

DateTimeFormatter formatter = DateTimeFormatters.pattern("yyyyMMdd");

Any of these formatters can then be used both to print dates and to parse them:

LocalDate eighthOfDecember2010 = formatter.parse("20101208", LocalDate.rule());
assertEquals(formatter.print(eighthOfDecember2010), "20101208");

or more succinctly:

LocalDate eighthOfDecember2010 = LocalDate.parse("20101208", formatter);
assertEquals(eighthOfDecember2010.toString(formatter), "20101208");

The passing of the rule() for LocalDate in the first example is necessary to specify what type you're expecting. If you don't know, and don't pass it, you'll get a CalendricalMerger, which contains all the fields in the format. This means the parsing is separated from its interpretation, which is what the rule is for. To interpret the parsed data as a particular type, you pass in the appropriate Rule, which makes the parsed data produce that type of object, discarding any extra information.

Other highlights

A particular delight was the discovery of the DateAdjuster interface, which allows you to write arbitrarily reusable classes that perform the complex date adjustments that are common in finance; the richness of the API allows you to perform those adjustments with unprecedented accuracy and efficiency. Be careful not to write naive adjusters that just loop over a date, incrementing it by one day and testing a condition, though, as performance will be terrible. We used a DateAdjuster to move a date into the next quarter, and then used that class in our NextExpiryAdjuster, which computes the next interest rate future expiry date after the date supplied (interest rate futures always expire on the third Wednesday of the next quarter). A sketch of this kind of adjuster appears at the end of this post.

Where to get it?

You can find JSR-310 at http://threeten.sourceforge.net/. From there you can access both the main TRUNK of the subversion repository, and also binary jar files, a user guide and a couple of slide decks. Go for it!
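As an illustration of the adjuster idea (not OpenGamma's actual code, and written against the final java.time API, where DateAdjuster became TemporalAdjuster), a next-quarter adjuster might look something like this:

import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.Temporal;
import java.time.temporal.TemporalAdjuster;
import java.time.temporal.TemporalAdjusters;

public class NextQuarterAdjuster implements TemporalAdjuster {

  // moves a date to the first day of the following quarter (Jan/Apr/Jul/Oct)
  @Override
  public Temporal adjustInto(Temporal temporal) {
    LocalDate date = LocalDate.from(temporal);
    int monthsToAdd = 3 - ((date.getMonthValue() - 1) % 3);
    return date.plusMonths(monthsToAdd).withDayOfMonth(1);
  }

  // an expiry-style adjustment can then build on it, e.g. third Wednesday of that quarter's first month
  public static LocalDate nextExpiry(LocalDate date) {
    LocalDate startOfQuarter = (LocalDate) new NextQuarterAdjuster().adjustInto(date);
    return startOfQuarter.with(TemporalAdjusters.dayOfWeekInMonth(3, DayOfWeek.WEDNESDAY));
  }
}

A caller would typically apply it with something like someDate.with(new NextQuarterAdjuster()).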
Posted about 14 years ago
Today, I'm pleased to announce that in December 2010, OpenGamma closed its second round of funding: a Series B investment led by FirstMark Capital, based in New York City. While there's been a lot of press coming out this week, including our joint press release with FirstMark, I wanted to give a little more color on why we raised this round, what we're going to do with the additional financing, and why we're so excited to be working with FirstMark for the next phase in OpenGamma's life.

When we raised our first round of funding with Accel in August 2009, our initial goal was to build the best, and hopefully last, platform for performing and managing the computations so key to all modern finance. Due to the sheer breadth of what we were attempting to build, we knew that this would take quite some time to get to what startup veterans are now calling the Minimum Viable Product. Hence, we went dark to build it. When we finally launched our first real website and told the world what we were doing in July of 2010, we immediately had a flood of interest from people interested in the OpenGamma Platform and what it could do for them. We started working with the first customers in our Early Adopter Program, refined the system, expanded what it was capable of, and put it in the hands of real traders and risk managers.

What these users were telling us was clear: they wanted to aggressively exploit what we had built beyond our initial expectations, and we needed to grow to support their needs. That led to two requirements:

- We needed additional capital to make sure that every developer and end-user of OpenGamma technologies was as successful as they could possibly be; and
- We needed to expand our geographical footprint to satisfy the inherently global nature of our customer base.

London and New York are, while competing for the title of financial capital of the world, almost better thought of as sister cities. We speak the same language, we share similar legal and regulatory structures, and we're actually almost the same distance from each other as New York is from Los Angeles. In fact, while I never really went to New York when I lived in San Francisco, as soon as I moved to London and started working in finance I was flying over to Manhattan on an extremely regular basis. So it was pretty obvious that we would need a presence in New York before long.

But just as much as the general similarities and sympathies the two cities share (they don't call it NyLon for nothing), customers were telling us they needed us to have a presence in New York. Many large American and European banks split their senior management between the two cities; many hedge funds have desks in London (or Switzerland) and New York (or Connecticut). People were telling us they needed to make sure that OpenGamma was able to support them no matter what side of the pond their developers, traders, and risk managers were located.

So once we had decided that we needed capital, and needed to open a New York office, the next logical step was to raise that capital from a source that understood our business and had a deep and strong connection with New York City. We found that source in Lawrence Lenihan from FirstMark Capital. Lawrence brings to the OpenGamma board a wealth of experience in technology investing, and a fantastic understanding of the financial services industry and how revolutionary, disruptive technology like OpenGamma can change and improve how the industry operates.
His strong connection to New York City, its people and industries, can only benefit us as we expand out of our London home (which will remain our global headquarters).

We'll be putting FirstMark's (and Accel's, who continued their participation in this round) money to hard work. We're building out our commercial operations team (so that we have the capacity to actively educate the market and support customers and users); we're opening a New York City office (to make sure that we can support customers in both of the world's twin financial services capitals); and we're continuing our investment in research and development (to make sure the OpenGamma Platform continues to develop into the single best system you can base your risk and analytics infrastructure on).

In short, we've got our work cut out for us, but with FirstMark and Accel behind us, I'm confident we'll succeed in our mission of changing the way people deliver analytics to financial services users. I'm extremely excited about OpenGamma turning this corner in our history, and I look forward to sharing more as we open our office, continue our Early Adopter installations, and do our first Open Source release later this year!
Posted about 14 years ago
Usually, I like to let small errors by the media go; life is too short to try to ensure that every single fact is correct when the press covers you. That being said, today's report by VentureBeat on our funding had a few things that I definitely wanted to address. As VentureBeat moderates comments and hasn't approved mine, this is the best way to address them.

We Don't Have A Product Yet?

This is the one that I see people most surprised about on Twitter. The simple fact is that we do have a product (an extremely good one in the OpenGamma Platform); we're in beta at one site already and going into beta at another extremely shortly. What we don't have is a generally available release. I've blogged about this in the past (see: Why We Haven't Launched Yet), and the same things are true now that were true then. When the first Open Source release comes out, we want to make sure that everybody in the world has the confidence that it's production quality (by actually being in production), and has the confidence that the integration points are stable (by having them be used for integration in a real-world environment). This is the entire point of our Early Adopter Program, a process that has been working extremely well for us.

OpenGamma Is A Souped-Up Excel?

Yes and no. It's true that we've built some of the best Excel integration in the industry, so that every single component in an OpenGamma installation can be easily accessed from Excel. It's true that you can build extremely powerful sheet-based solutions using the OpenGamma Excel Integration, with your entire distributed computational environment backing those sheets. We're extremely proud of what we've done there, and when traders and risk managers see it, they get extremely excited about the possibilities it offers them.

But that's just part of the story. Excel is one client; we're also building a full-scale web application that will be instantly familiar and comfortable for people who are used to using sites like Facebook, Twitter, and Mint (rather than what passes for a user interface in much of Financial Technology). In addition, we have a comprehensive suite of RESTful, MOM-based, and code-level APIs for accessing every part of the OpenGamma Platform in custom environments. We're constantly evaluating our integration options so that we can put the power of the OpenGamma Platform into as many of the tools that financial professionals use on a daily basis as possible. And the power of our Excel integration, I believe, comes from its integration with everything else in the platform. A trader can frame a calculation in Excel and instantly share it (via the web interface, or any other access mechanism) with his colleagues in sales, trading, or risk management. A pure Excel solution doesn't give you that power, and that's the power people have told us they want from a risk and analytics infrastructure.

Reliable and Slow-Moving Trumps State-of-the-Art?

Personally, I don't believe this to be the case. I believe that the Credit Crunch proved to financial industry professionals that siloed systems separating risk and front office trading, or even desks from each other, aren't suitable for the fast-paced modern capital markets. I believe that risk managers know that they need to move from a batch and overnight world into a near-real-time, event-driven world.
I believe that firms know that every dollar they spend on maintaining infrastructure that they don't derive proprietary value from is, fundamentally, a dollar they shouldn't have to spend. But luckily you don't have to take my word for it: that's what financial institutions are telling us, on a near daily basis.

If you're interested in finding out more about what OpenGamma can do for you before our GA launch, please feel free to get in contact.
Posted over 14 years ago
It's that time again: time to open our doors, lay out a full spread of food and drinks, fire up the demo servers, and show anybody interested what we're doing! So on November 17, 2010, we're hosting the second OpenGamma OpenHouse!

A lot has happened since the last OpenGamma OpenHouse:

- Our web site has launched, telling the world what it is we actually do.
- We've had a lot of people let us know that they're interested in finding out more details about our technology and the team without being part of our Early Adopter Program.
- Our technology has developed by leaps and bounds.
- The team has grown significantly.

We decided that with daylight getting shorter and shorter here in London, it would be a great time to invite people back to Kirkaldy House to meet the team, see the platform in action, and ask any questions they might have!

You can just come by our offices in Southwark after work on Wednesday, 17 November, but we'd like you to register on Eventbrite (Password: ValueAtRisk) so that we have a rough idea of who's coming, how much food to order, and, perhaps most importantly, how many beers to have ice cold! We hope to see as many of you from the London area as can make it, two weeks from tomorrow!
Posted over 14 years ago
One of the most common questions about the OpenGamma Open Source strategy has to do with the "commercial components" that we allude to in parts of our web site. That obviously implies some degree of Open Core licensing, but what do we mean by that? I'm studiously going to avoid any of the religious arguments in favor of or opposed to an Open Core licensing strategy, and just talk about how we arrived at this decision and what it means for customers.

What IS Open Core?

If you're not a regular part of the Open Source commercial community (which I expect many blog readers aren't), you might not be familiar with the term Open Core. Essentially, Open Core means:

- There is a core product which is released under an Open Source license; and
- The primary vendor/author of that product has additional components, modules, or functionality which are only available under a proprietary license.

There are a number of reasons why vendors who produce primarily Open Source products will pursue an Open Core strategy, and there's a lot of debate about whether a company pursuing such a business strategy is staying true to Open Source principles. Again, I'm going to ignore all that and just talk about OpenGamma.

The Trade Secret Conundrum

The OpenGamma Platform is designed for integration: integration with bespoke software you've written; integration with proprietary vendor systems; integration with hardware and software infrastructure; integration with third-party services. Where those services aren't bespoke, it's useful to be able to leverage a single best-of-breed integration module rather than having every customer write their own.

The problem comes in when you consider that financial technology (pre-OpenGamma, of course) is a minefield of proprietary technologies, many of which are subject to Trade Secret and Confidentiality agreements. While you may argue that the firms that make these systems should quit being so closed about their APIs, changing these firms will take a while (if they'll ever open up), and in the meantime, the APIs are all proprietary.

What are we talking about in particular? What types of components might you expect to have restrictive enough licenses that OpenGamma would be forced to release its integration under a proprietary license?

- Market data providers like the Bloomberg Server API and Reuters Market Data Services.
- Trading systems for loading position and security data.
- Reference and golden copy database systems.

So where does that leave Open Source solutions? Unfortunately, going nowhere. While OpenGamma can join developer programs and get access to the APIs, we can't actually release any of the code that we write against those APIs in any public release whatsoever (whether Open Source or not). To provide these modules to customers as pre-built integration options, we must release them under proprietary licenses.

Our Commercial Fairness Principle

There's another category of features and modules that we will probably release as proprietary components, and that's covered by what I call our Commercial Fairness Principle. In essence, this boils down to one statement: if a component only exists to facilitate use of an expensive commercial service, it's only fair for OpenGamma to get revenue for the integrating component. There are a number of examples of technologies and services in the Financial Technology space which may have open APIs, but which cannot be used except with licenses that cost a lot of money.
They simply aren't useful on their own, so firms that would make use of those technologies have already made a significant commitment to financially supporting them. If OpenGamma produces a component whose exclusive use is to facilitate use of a commercial service, we may make that a proprietary component. We certainly won't do it for everything, but we reserve the right to do so.

A Solution True To Our Beliefs

However, a big part of the OpenGamma vision is to enable developers in the financial services industry by giving them access to the source code that they need to do their jobs. In addition, we want to have a positive, collaborative relationship with our customers and partners. How can we square that with our commercial needs and legal requirements not to ship the source code for certain components under an Open Source license?

We've come up with what we believe is a reasonable solution to this. OpenGamma will give perpetual, royalty-free source code licenses to customers of our proprietary components. What that means to you is that if you sign up as a customer of one of these components, you have the source code for your whole OpenGamma installation. Just as importantly, if you choose to end your commercial relationship with OpenGamma, you can continue to use the components, and extend and self-support them through the source code you received.

We think this policy satisfies both of the commitments that we're making to our customers and partners: developers will have all the tools that they need to do their jobs (including source code that is immediately available when they need it, not locked away in an escrow system), and customers and partners will be free to choose whether or not to maintain a commercial relationship with OpenGamma.

Our Commitment To A Usable Core

Where does that leave the open core part of the architecture? Will it be usable, or will it be artificially limited? We believe that Open Core only works where the Core is usable, and where the extra components really are specialized, or have legal obligations not to open the source. Hopefully my clarification of what will guide our decision-making process for whether something will be a proprietary component or not makes this obvious, but let me be extremely clear: the Open Source parts of the comprehensive OpenGamma offering will be sufficient in quality, scope, and functionality for production use.
Posted over 14 years ago
Earlier this week I sat down for an interview with Finextra, which they filmed and put on their web site. We had a chance to talk about how we view OpenGamma as a common software infrastructure stack (rather than necessarily a common hardware infrastructure stack), changes to how firms are viewing their analytics and risk needs post-Credit Crunch, and what customers can expect from our Early Adopter Program. However, all Finextra videos are done in WMV format, which many people have told me they can't view easily. With their permission, I've embedded the higher quality Flash version below.
Posted over 14 years ago
The Open in the name OpenGamma stands for two primary things: the Open Architecture that we've built our platform around, and the Open Source way that we're delivering our primary platform. Since launching our new website, we've had a number of questions on our Open Source strategy that I wanted to address. Several similar blog posts on this theme will follow, but I wanted to start with the one that's attracted the most attention.

Why Haven't You Launched Yet?

The conventional wisdom these days when doing anything Open Source is "release early and often." Even if you don't think the software is ready, even if there are still bugs and problems and features missing, release it anyway and see what type of feedback you get. This is definitely the standard state of play, and we've used it in the Fudge Messaging project that OpenGamma has sponsored since our inception. But we don't think it was the right approach for the OpenGamma Platform.

The first reason is an obvious one, and I alluded to it in my first corporate blog post: we want the first release to be production-ready. And while we can follow an extremely rigorous development, test, and QA process here internally, because our platform is designed for integration, the only way to make sure that the initial release has APIs that are suitable for a variety of customers is to have a variety of customers (of different sizes and needs) attempt to integrate it into their environments. We want not only the platform to be production quality, but also our first set of APIs to be stable enough that you can feel confident integrating against them.

The second is not quite as obvious: we need to get under commercial confidentiality arrangements in order to get the level of feedback that we need to make sure that we've hit point number one (a production-quality first release). Financial institutions are notoriously disclosure averse, for a wide variety of reasons:

- They may have proprietary strategies and algorithms that even relatively minor code or infrastructure disclosure may leak, giving competitors an advantage in the market;
- They may have regulators who want to make sure that confidential client/customer/counterparty data isn't inadvertently leaked to the market;
- They may have systems and procedures that they know aren't optimal, and don't want people to know that they use.

Enterprises in this type of situation (which pretty much covers every financial institution that is likely to be an early adopter of OpenGamma) require pretty serious non-disclosure and confidentiality agreements. Given that we need to be able to prove the platform at their sites, we need to be in a situation where we can sign those agreements, and our early adopters need to believe that we take them seriously. The primary way that we can do that is to enter into a commercial relationship with those firms, where confidentiality and non-disclosure are part of the terms of the relationship. Non-disclosures without commercial consideration can be difficult to enforce sometimes (or so my lawyers tell me); non-disclosures with commercial consideration are extremely enforceable.

Finally, given that we want the initial release to be of production quality and with a hardened API, and we need to sign commercial agreements to get the types of confidentiality agreements customers in the space require, there's one final element that goes into our controlled release strategy: access to our developers.
One of the benefits of our Early Adopter Program is direct developer-to-developer connections. That means that if you're a member of our EAP, your developers working with OpenGamma get direct phone and email connections with the developers who wrote the features you're working with. If the problem requires on-site help, we'll put R&D group developers (not professional services staff) on your site for as long as it takes.

Imagine if we had done an initial release before we launched the EAP. I know what that's like, and I particularly know what it's like to get so much community attention that you can't actually keep up with the community feedback and support requests. So by controlling access to the platform for the first few customers in our EAP, we can guarantee them that our developers will have the time and attention to focus on their unique needs, without having to fit them into a much larger support rotation.

So to sum up why we're billing ourselves as an Open Source platform but haven't yet made our first release:

- We need the first release to be production quality, which we can only guarantee by putting it into production;
- We need to have high confidence that the first versions of the APIs are stable and unlikely to change, which we can only do by seeing what users require from those APIs;
- In order to get those data points, we have to work within the confines of commercial relationships to get strong confidentiality agreements;
- The limits on the numbers of initial customers are to ensure that they get the type of strong, dedicated, developer-to-developer support that is essential to early adopters.