Tuesday, February 28, 2006

Case Study: Distributed Computing in Financial Services

We've got a new case study up over at the Digipede site, and it's one of the most exciting customer stories we have.

The customer is a large financial services company (who, being very secretive about the methods of their success, prefers not to be named). Nameless, but big: they manage scores of billions of dollars.

They do a lot of analysis, naturally--minimizing value at risk, satisfying portfolio constraints, and managing transaction costs.

They have a great .NET development team, and they had been developing their own distribution system to farm all of this work out across 20 servers. They made a common discovery: they were spending more time writing the code to do the distribution than they were working on their own algorithms. They looked around, and they found the Digipede Network.

Within a week they had a proof-of-concept implementation, and within two weeks of that they were ready for production. That was my favorite part--their lawyers took longer to approve the sale than their programmers did to port their software!

I'm not bringing this up just to brag about a great product. It really speaks to how important a good development framework is. By allowing their .NET and COM developers to take advantage of many machines without rearchitecting their solution, the Digipede Framework simply removed distribution as an obstacle. They are now using the Digipede Network for a variety of applications--everything from C++ to VB to C#, using both COM and .NET.

By bringing in a third party tool, they now have a distribution system that's more scalable, more powerful, and, most importantly, lets them concentrate on their core competency.

This is a scenario I hear about all the time. Developers need to scale an application (maybe it's a web application, maybe a 3-tier application), so they start down a path of multi-threading or .NET remoting (or DCOM or COM+). But soon they realize that the distribution part is hard (especially if they want robustness, good monitoring and reporting, guaranteed quality-of-service, etc.).
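
To make that concrete: below is a minimal sketch (all names invented, C# 2.0-style) of the homegrown work queue most teams start with. This part is easy to write; what it doesn't give you is everything in that parenthetical above--distribution across machines, recovery when a box dies mid-task, monitoring and reporting, or any quality-of-service guarantee.

// A bare-bones, in-process work queue -- the typical starting point before
// the hard parts (cross-machine distribution, failure recovery, monitoring,
// QoS) show up. All names here are illustrative.
using System;
using System.Collections.Generic;
using System.Threading;

public delegate void WorkItem();

public class HomegrownScheduler
{
    private readonly Queue<WorkItem> _queue = new Queue<WorkItem>();
    private readonly object _gate = new object();

    public void Start(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            Thread t = new Thread(new ThreadStart(WorkerLoop));
            t.IsBackground = true;
            t.Start();
        }
    }

    public void Submit(WorkItem item)
    {
        lock (_gate)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_gate);
        }
    }

    private void WorkerLoop()
    {
        while (true)
        {
            WorkItem item;
            lock (_gate)
            {
                while (_queue.Count == 0)
                    Monitor.Wait(_gate);
                item = _queue.Dequeue();
            }
            // If this throws, or the machine running it goes down,
            // the work is silently lost -- nobody retries, nobody reports.
            item();
        }
    }
}

Spreading that queue across twenty servers, with retries, reporting, and guaranteed execution, is exactly the project this customer decided to stop building.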

Monday, February 27, 2006

Windows Grid: SDForum Talk Wednesday Night

For those of you in the Bay Area: I'm giving a talk at the SDForum Windows SIG in Palo Alto on Wednesday night.

The content will be similar to the talk I gave at Code Camp Seattle, but with a longer slot, I'm going to be able to go into more detail.

I'll give the basics of Object Oriented Programming for Grid, and I'll build some applications from scratch. I'll then grid-enable an existing application or two, showing how you can take an app and "retrofit" it to run on the grid.

The last thing I'll do is a brand new demo (and one that I'm really excited about): I'm going to talk about grid and SOA, and why grid can help scale an SOA well. I'll also demonstrate a Web service application that's been grid-enabled, and show how that improves its scalability.

The whole shebang starts Wednesday at 7:00 in Palo Alto; details available here.

Wednesday, February 22, 2006

Koppel and Nawrocki on Grids and SOA

As I sat on the couch for the second consecutive day fighting a miserable cold, two of my favorite blog postings of the year popped into my inbox (thanks, Attensa).

First was Muli Koppel's blog. Are you reading it yet? He writes what is positively the most intelligent blog I've ever read. He doesn't get the attention that some other bloggers do (because he doesn't write about Web 2.0), but believe it or not: there are really smart people writing about things that aren't Web 2.0. Muli is one of them. Every post is well thought out and well written, and I read each one several times. He has more content per word than anyone blogging.

Anyway, Muli's post today was called Meant for Each Other: Enterprise Architecture and SOA. He begins with the most practical definition for Enterprise Architecture that I've ever read:

Enterprise Architecture is an infrastructure and a set of Machines constructed in order to manage a chaotic, dynamic, unpredictable, complex, organic, prone to error, frustrating, Enterprise IT, which has to support an ever increasing, dynamic portfolio of products and services, through constant "ASAP, Now, Right-Away" modifications of business processes.
He goes on to talk about the discussions that are happening in the industry about EA and SOA, and he sums it up thusly:
EA will shake off its image as a "documentation, procedures and guidelines" body, repositioning itself as a practical, implementation-oriented discipline aimed at the creation of an Enterprise Management Infrastructure, while SOA will be repositioned, no longer as an Integration/Interoperability architecture, but rather as an Enterprise Management architecture.
One of Muli's gifts (both as he blogs and, I surmise, in his career) is to make sense of the buzzwords of the day in practical terms. Seriously, stop reading here, and go read his post. Then come back.

Of course, now that you've read it, you know my favorite quote from the post. I'll quote it here:
There are, clearly, other infrastructures and machines to add to this picture. For instance, grid/utility computing is an essential part in the real-time Enterprise...
Ok, enough stealing Muli's material. The other post that fell into my inbox today was from Greg Nawrocki's Grid Meter: today's SOA developers .... tomorrow's Grid developers.

Greg wants to know why SOA is gaining so much more traction than grid, even though both have been hyped for a few years now.

Among the reasons for SOA's success, he notes the wide range of languages available (I was glad to see him include the .NET languages along with Java), but the real reason, he says, is "that you can actually take relatively mainstream enterprise applications and write to a service with a simple API." He's right, of course. While, as many people (including Muli) have pointed out, SOA ≠ Web services, SOA is certainly enabled by Web services.
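
On the .NET side, that simplicity looked something like this--a rough, ASMX-style sketch (the class name, method, and namespace URI below are all made up) of what it took to expose a method as a Web service in .NET 2.0:

// A minimal ASMX-style Web service -- illustrative names and namespace only.
// Referenced from an .asmx file, a class like this is roughly all it takes
// to expose a SOAP-callable method from ASP.NET 2.0.
using System.Web.Services;

[WebService(Namespace = "http://example.com/quotes/")]
public class QuoteService : WebService
{
    [WebMethod]
    public double GetPrice(string symbol)
    {
        // Stand-in for the real lookup or calculation.
        return symbol.Length * 10.0;
    }
}

The point isn't the toy method; it's that the plumbing never shows up in your code, which is exactly the property Greg credits for SOA's traction.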

Greg notes that the Globus Toolkit 4.0 embraces WSRF, and he points to an IBM paper that details building a grid and implementing Globus.

Greg is right on in noting that there is a real convergence between grid and SOA. SOA cries out for a scalable architecture, and a Web service enabled grid is perfect for building that architecture. But as long as an implementation requires a lengthy consulting contract (which, after all, is why IBM is writing papers about it), grid can't really take off.

People need tools that are easier to use and grid systems that are easier to implement. Globus is the pre-eminent standard toolset for building a multi-organizational grid, but its implementation is too difficult for a small- to medium-sized venture to undertake.

In the meantime, of course, there are vendors who are making the tools to make these types of architectures available today. (We're not the only ones, but we're the only ones doing it on .NET). Anyone who wants a grid right now (not in the future) should be looking to these toolsets to help them build that grid today.

Technorati tags: SOA, globus, wsrf

Friday, February 17, 2006

Tame Your Technology

I was interviewed this week by Neal Miller, host of Taming Technology. Taming Technology is a radio show that tells its listeners "how to survive and succeed in the high tech jungle."

Neal is quite an affable guy, and he and I spent about 45 minutes discussing grid computing (and, of course, I got in a plug or two for the Digipede Network). The topic for the episode was "Grid computing--what is it and how is it being used?"

If you'd like to listen, watch the Taming Technology website--it'll come up soon. Or you can listen to it when it first streams on the VoiceAmerica Business channel--tomorrow at noon, Pacific. (VoiceAmerica is "the industry leader in Internet talk radio").

It was my first long-format interview; I think I was wordier than I should have been. I hope to do more of these in the future, and I hope to get more concise as I do. If you listen this weekend, let me know what you think!


Thursday, February 16, 2006

4th Story Gets Faster

If you read Bobsguide (the guide to software and technology in the finance industry), then you've already seen this: Digipede and 4th Story Join Forces to Grid-Enable Financial Software. But if you don't, I'll tell you a bit about it.

4th Story is a firm with deep knowledge in two areas: finance, and software development. Their products (all named after National Parks and Monuments) do incredible things for both buy-side and sell-side firms. They've got some huge clients (Barclays, for one), and they do amazing things with analytics.

They've developed their whole suite in .NET. As they've gotten into very complex algorithms (think genetic algorithms) and large amounts of historical data, they've found the need to use multiple machines in order to scale properly. Rather than write that portion themselves, they decided to plug into the Digipede Network. Without rearchitecting their solution, and by using our OOP-G methodology (Object Oriented Programming over Grid), they were able to quickly modify their software to run on the Digipede Network automatically. Their objects are now being distributed across the Digipede Network and executed in parallel.
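
I can't show 4th Story's code, and the sketch below is emphatically not the Digipede Framework API--it's just a hypothetical illustration of the OOP-G shape: the existing analytics class picks up an entry point the grid can call, instances carry their inputs out and their results back, and the framework worries about serialization, scheduling, and failed nodes.

// Hypothetical illustration of the OOP-G shape -- these types and names are
// invented for this post, not taken from the Digipede Framework.
using System;

[Serializable]
public class StrategyBacktest
{
    public string Symbol;          // inputs travel out with the object
    public double Fitness;         // results travel back with it

    // Called once per instance, on whatever machine the grid assigns.
    public void Execute()
    {
        Fitness = RunGeneticSearch(Symbol);   // the existing, unchanged business logic
    }

    private static double RunGeneticSearch(string symbol)
    {
        // Stand-in for the real algorithm.
        return symbol.Length;
    }
}

In a real integration, the application builds one of these per scenario, submits the whole batch, and collects the mutated objects as they come back in parallel.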

The result? They've now got a very scalable solution. Their customers will be able to scale the 4th Story suite as much as they want--just by adding more machines to the grid.

4th Story will be the first ISV logo on the Digipede partner page--but we've got a few more deals in the works, so look for more soon!

Catching Up Is Hard to Do

You can tell I'm busy when the only post I can come up with in 3 days is a "check this out" post. Still, this one has some good reads. Check this out.

I just discovered Jeff Schneider's blog Service Oriented Enterprise. This guy knows his stuff. The first post I found (The Sheep that Shit Coleslaw) is a great representation of system architectures using Legos as examples. Did I say "Legos?" Yes, I did. If you're a fan of SOA or toys with interlocking pieces, check it out. His points are not dissimilar from what Kim said the other day. But when you say it with Legos it's so much funnier! Jeff, consider me subscribed. And I've got a lot of catching up to do. Jeff's been blogging since September 23, 2001!

The other interesting find I had this week was Nicholas Carr's presentation on utility computing and open source. I hope I can find some time to write a bit more about it. In the meantime, check out his pdf here.

Monday, February 13, 2006

Digipede Webinar Tomorrow

If you're interested in seeing a webinar about the Digipede Network, I'm giving one tomorrow at 10:00am PST.

The presentation isn't given in "marketingese." I'm going to give an overview of how our distributed computing solution works, but then I'm going to dive into real code. I'll use Visual Studio 2005 and VSTO to take an Excel spreadsheet with .NET code running behind it and grid-enable the .NET code to run on the Digipede Network.

If you've been wondering what this Digipede thing is all about, this is your chance. It runs about 30 minutes. Audio will be available via a call-in phone number.

Send me an e-mail (click the link at right) or fill out the form here.


Sunday, February 12, 2006

GMail: It's a SaaS World After All

[Update: 2/13/2006 12:02 PST]: I just realized that Google's first implementation is at San José City College, not San Jose State. Updated accordingly.

Over at Expert Texture, Robert had a great post yesterday raising concerns about Google branding GMail (Corporate GMail). According to their post here, Google is going to start offering GMail to corporate customers--with their domain names. They're starting by taking over San José City College's e-mail.

Robert raises a couple of salient issues:

  • How will GMail isolate company data and ensure that it never gets shared with other customers?
  • How secure is the GMail database? Leaving aside the issues raised by the EFF yesterday, what are the security procedures in place at Google to ensure that customer data are secure?


Those are great questions. Few pieces of software contain as many corporate secrets as e-mail servers. In this day and age, virtually all communication goes through e-mail servers (much, much more gets done this way than through phones or IM)--and that means that corporate secrets are in that system.

Certainly, many small-to-medium companies don't worry much about intellectual property and therefore won't worry much about this. After all, if I were operating Dan's Frisbee and Waffle Shop, and corporate GMail gave me the ability to have e-mail addresses like dan@dansfrisbeeandwaffleshop.com without investing any time, staff, or money in hardware or software, I might think that was terrific. Who cares if someone finds out that I'm introducing a new flavor of syrup next week?

But if I'm a large corporation (and let's face it: by announcing the service with a 10,000+ account implementation, Google is making it clear that they aren't aiming at the frisbee and waffle shops of the world), I have grave concerns about putting my most secret intellectual property in the hands of any other corporation, even if their motto is "Don't be evil."

But in a larger sense, this same question is being faced all over the software industry as people create and offer Software as a Service. And in many cases, companies are finding ways to assure their clients that their secrets are safe. Look what Salesforce.com has been able to convince companies to do: hand over all of their customer and sales data. For many companies, those secrets are as valuable as the ingredients to their special sauce.

SaaS companies are finding ways to convince customers that their data are safe (from both outside and inside vulnerabilities). But it will continue to be a major issue as SaaS becomes a more common business model. And, undoubtedly, just as we hear today about credit card companies accidentally exposing credit card numbers, in the not-too-distant future we'll hear of some SaaS provider who inadvertently revealed the ingredients of the special sauce.

So, SaaS developers, I give you this. I've already told you that you have to design your system to be scalable from the outset. Now: make sure you're thinking about security from the ground up, too.


Wednesday, February 08, 2006

Web Services + Grid = Crazy Delicious

The other day I happened to glance at the March 2005 issue of Cluster World magazine that was sitting around in the office. One article in particular caught my attention (sorry I can't link to it; ClusterWorld.com's "Old Stuff" link is broken, but it's also available here in PDF format). Lee Liming of Argonne National Lab wrote it; it's called Web Services and the Grid.

Lee's premise (and it's a good one) is that clusters and grids are a natural tool for developers of Web services.

As clusters become increasingly essential elements of the Grid's physical fabric, Web services are becoming essential elements of the Grid's application development toolset. Given the importance of clusters and Web services to Grids, cluster owners and operators need to understand the implications of Web services on the applications that run on their clusters.
He's right. Web services require scalability. One of the inherent truths of creating good web service applications is that you must plan for success: if you've written your software well, people will use it. You have to be prepared for that success and therefore prepared to scale your application. Scaling to a cluster is a perfectly good way to do that. Scaling to a grid is even better. Why? Because it's more cost-effective. As you need to grow, adding processing power to your application is as simple as adding more nodes to your grid. Even if the grid is supplementing a dedicated cluster, it gives you the processing power you need at peak times without the expense of dedicated hardware that sits idle 95% of the time.
Note that unless the Web service has been designed to use the back end nodes on the cluster, it will only use the cluster's head node. Web services that require significant processing power should be written to submit tasks to back end nodes via the cluster's scheduler or other tools. The application developer must explicitly implement this capability.
This is an important point that many people miss, or assume Network Load Balancing will handle automatically. In fact, NLB is great for serving web pages from multiple servers, but it is not a good tool for handling compute-intensive processes (for a list of disadvantages of NLB, see Brian Madden's article How to Configure Network Load Balancing).
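
To make Lee's point concrete, here's a minimal sketch of that submit-and-poll pattern. Every name below is made up, and the ThreadPool call is only an in-process stand-in for "submit tasks to back end nodes via the cluster's scheduler"--in a real deployment, that's the line where the work would leave the head node.

// Sketch of a Web service that offloads compute work instead of doing it
// on the head node inside the request. Illustrative names; the ThreadPool
// call and the static dictionary are in-process stand-ins for a real
// cluster or grid job submission and result store.
using System;
using System.Collections.Generic;
using System.Threading;
using System.Web.Services;

[WebService(Namespace = "http://example.com/pricing/")]
public class PricingService : WebService
{
    private static readonly Dictionary<Guid, double> Finished = new Dictionary<Guid, double>();
    private static readonly object Gate = new object();

    [WebMethod]
    public Guid SubmitPricingJob(string portfolioId)
    {
        Guid ticket = Guid.NewGuid();

        // In a real deployment this is where the job goes to the scheduler
        // and out to the back-end nodes.
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            double value = PricePortfolio(portfolioId);
            lock (Gate) { Finished[ticket] = value; }
        });

        return ticket;
    }

    [WebMethod]
    public bool TryGetResult(Guid ticket, out double value)
    {
        lock (Gate)
        {
            if (Finished.TryGetValue(ticket, out value))
                return true;
        }
        value = 0.0;
        return false;
    }

    private static double PricePortfolio(string portfolioId)
    {
        Thread.Sleep(2000);     // stand-in for the compute-intensive work
        return portfolioId.Length * 1000.0;
    }
}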

Back to Lee's point: if you want to take advantage of the power of a cluster or a grid, you need to do that yourself. Of course, I happen to think that you should simply get yourself a great toolset that will handle the job submission, guarantee execution, monitor the processes, handle node failure, etc. That frees you up to concentrate on the functionality of your Web service, rather than the technical details of scalability.

Lee sums it up like this:
...it seems likely that service-oriented applications—such as those based on Web services—will lead to significantly greater use of clusters (i.e., more business) than traditional, manually launched applications. Early efforts to gear clusters to become high-power hosting environments for these types of applications will position administrators well in a service-oriented era.

Lee is obviously talking to vendors who offer hosting solutions. But his points are broader and apply to anyone who is designing and building Web services or SOA. If you're writing software that will be available as a service (Web service, SOA, SaaS--you choose your favorite), grid it up and let it fly.

(And if you don't get the reference of the title of this post, you must not have seen the Chronic of Narnia yet, or don't understand the allure of Mr. Pibb and Red Vines).

Thursday, February 02, 2006

New Case Study!

Over on the Digipede site there's a new case study about a Digipede customer: Trekk Cross-Media.

Trekk is a forward-thinking marketing company with high-tech savvy. They came up with an interesting product--for their clients (large retail chains), they create direct-mail packages that include maps from homeowners' houses directly to the nearest chain store. They wrote the application themselves, and their customers liked it a lot. In fact, too much. The product was a hit in test markets, and the customers wanted to take it nationwide.

But Trekk's application could only produce 750 maps per hour. Handling large jobs would take days. They looked into a difficult and costly option: converting their application to a multi-threaded architecture, then buying expensive new hardware.

Luckily, before they got too far along that road, they found Digipede. Using a trial copy of the Digipede Network, the Digipede Workbench, and a command-line version of their product, they quickly determined that they could get the scalability they needed. They bought it. Using the Digipede Framework SDK, they modified their application to integrate directly with the Digipede Network; within a few days, they were in production.

The case study has some nifty pictures and some quotes from Jeff Stewart of Trekk. You should check it out.

What I love about this project is how quickly it all happened. Jeff attended a webinar and decided to try it out. Workbench let him distribute his command-line tool immediately. And after deciding to go with the Digipede Network, they were up and running within days--all without a single visit from Digipede. They did everything themselves. I like to think it's a testament to the ease of use we bring to Windows distributed computing, but I'll have to give them credit for being savvy software developers as well.