Friday, March 31, 2006

Cool Financial Applications on Windows

Here are some of the cool Microsoft Partners I met at the Financial Services Partners Summit. As usual, the best part of the summit was meeting other ISVs and finding out what amazing things they're doing on the Microsoft Platform.


  • TAPSolutions: Their TAPMaster product acquires, stores, and manages market feed data.

  • Xenomorph: High performance database product, along with analytics and pricing tools to go along with it.

  • Eze Castle Software: Named for a real castle in France, Eze Castle has created a whole fleet of software for investment management firms.

  • ClusterSeven: They have tools and technology for managing, analysing and auditing Excel spreadsheets--a critical tool for compliance, among other uses.


  • Unrelated note: this is my 100th post at West Coast Grid. I've certainly enjoyed writing this blog so far, and I'm glad to get comments, questions and notes. I look forward to continuing this conversation!

    Thursday, March 30, 2006

    Microsoft Capital Markets Partner Summit

    John and I spent Wednesday in Redmond at the Microsoft Capital Markets Partner Summit: a gathering of the ISVs who provide tools for the capital markets space and the Microsoft team that supports them.

    There were some interesting companies there--I'll have another post on that later.

    It was very interesting to see how Microsoft is changing their approaches to serving their customers.

    On one hand, Microsoft doesn't sell very much direct to customers--indeed, something north of 95% of their revenue comes through partners.

    On the other hand, Microsoft has a vested interest in talking with customers, ensuring that the end customers understand both the value of the OS and the value of the tools available on that OS.

    That means that Microsoft has to work closely with their partners when talking to customers. The partners--ISVs--need to sell their products; but it's Microsoft's job to sell the platform.

    For years, when a large financial company wanted to buy a large system, they went to a consulting company--say, IBM. IBM would put together a team of companies: hardware, OS, middleware, software vendors, and consulting services. It was an easy way for the customer to buy: one company was championing the whole thing (IBM in this example, but there are others). Moreover, from the customer's perspective, there was one "throat to grab" if things went wrong.

    Microsoft wasn't selling that way at all. Microsoft got into financial companies the way they got into every other company on earth: selling operating systems for desktops and office applications. Then, in the late 90s and into the 21st century, a funny thing happened: Microsoft wrote a server operating system and a powerful database. Software vendors started writing high end packages on the platform.

    But there's a rub: how do you sell this? Microsoft didn't write applications for finance, nor did they employ experts in the field. So they didn't have an effective way to tell potential customers about the virtues of their platform. And the software vendors knew a lot about their areas of expertise, but they weren't heavyweight enough to push platform decisions on their customers. (I ought to know--I was writing enterprise software built on NT, SQL Server, and IIS back in 1997. The platform was a tough sell.)

    In the last couple of years (especially in the last year), Microsoft has learned many lessons about selling into these types of enterprises. They have hired aggressively, bringing experts on staff who can speak the language of finance while preaching the benefits of the platform. They've also pursued the integrators: with Accenture, they created Avanade. They also work with many of the other big integrators.

    They're holding events like yesterday's Capital Markets Partner Summit to help create a sense of community among partners--partners who may bid against each other on some deals, but are even more likely to have complementary products in many instances.

    Finally, they're providing coordination as these efforts go forward. They can approach customers not piecemeal, but together: the platform, the software, and the integrators to make it work. And, by taking the lead, Microsoft is essentially offering up theirs as the "throat to grab."

    Microsoft is doing the exact same thing in other industries, too (banking, manufacturing, health care, etc). They are helping us (the ISVs) bring our products to the market.

    Most importantly, they are giving their customers a better, more complete picture of what they can have when they choose this platform: OS, database, software vendors, and integrators, all on one page.

    It was a great couple of days. Thanks to Ed Muth, Kenny McBride, Rich Feldmann, Stevan Vidich, Christina Fritsch, and everyone who gave the great sessions.


    Tuesday, March 28, 2006

    In Redmond tonight...

    I'm in Redmond tonight and tomorrow for the Microsoft Capital Market Partner Summit--a gathering for partners with products and services in the financial realm.

    John and I are going to a reception tonight, then we'll be painting Redmond, um, red. Or going back to the hotel and working. We'll see what happens.

    We had a productive meeting with one finance ISV already this week--I hope this summit leads to more!

    Anyone recommend a dinner place in Redmond? Anyone in Redmond want to talk distributed computing? 510-816-7551.


    Thursday, March 23, 2006

    What a week!

    It was a busy week in Digipede-land, and I haven't found any blogging time at all.

    We've been in final QA on our new release, Digipede Network 1.2. Most of the last few weeks has been install and upgrade QA. As it turns out, that's a bunch of work! The new version is fully compatible with both .NET 1.1 and .NET 2.0, and supports mixed-mode operation (some agents running .NET 1.1, some running .NET 2.0, and some running both, and all combinations thereof).
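On .NET Framework, this kind of side-by-side runtime support is typically governed by each host executable's configuration file: listing both runtimes lets the process bind to whichever version is installed. A hypothetical sketch of such a file (not Digipede's actual configuration, just the standard mechanism):

```xml
<!-- Hypothetical app.config for a host executable that can run on either
     .NET 1.1 or .NET 2.0. The runtime binds to the first listed version
     that is actually installed on the machine. -->
<configuration>
  <startup>
    <supportedRuntime version="v1.1.4322" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>
```

Reordering the two elements flips the preference, which is one way to run some agents on 1.1 and others on 2.0 from the same installer.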

    The week was made all the more interesting because my co-worker Nathan's wife had a baby! Exciting for them, and Nathan has been a trooper (literally dashing out of the hospital to hand software off to me on a USB key at one point). But he hasn't been in the office, which means I've been directly supporting customers and potential customers. That's always fun (as a product manager, talking to customers is probably the most rewarding part of my job)--but it's hard to be direct support when trying to get a release out the door!

    But all's well that ends well. The release is done (!), and we've got at least one new customer this week (I hope we can release some details about it soon). We've also had some good talks with potential partners.


    Tuesday, March 14, 2006

    Friday, March 10, 2006

    How will they host it: Writely or wrongly?

    Everybody has heard that Google bought Writely today. Congratulations to everyone involved.

    Not everyone knows that Writely is built on .NET.

    My question: who's going to host it?

    Is Google going to port Writely to run on the Googleplex? Will they host it on .NET? Does Google already have .NET hosting ability? Will they use Mono?

    To me, it's one of the most interesting facets of the acquisition.


    It's the Infrastructure

    Update 2006-03-14 11:31 Changed a nonsensical word.

    I spend a lot of time explaining to people the value of what Digipede does: whether it's potential customers, potential investors, my mom, or people riding next to me on BART--it's something I like talking about, and it's something I believe in very strongly. Add that together with my love for my own mellifluous voice, and it's a recipe for me talking about Digipede a lot!

    One thing that's come up frequently lately is the idea of grid infrastructure.

    Before I go too deeply into that, let me explain something about our customers. I can divide our customers neatly into two camps: customers who bought the Digipede Network to accelerate/scale/distribute a particular application, and customers who are using the Digipede Network for a much broader suite of applications.

    To the former group of customers, it's a tool. They had a particular need for a particular piece of software, and they found that the Digipede Network filled that need.

    To the second group of customers, though, it's much more than just a tool. It's now part of their development and IT infrastructures. We have one customer who is running at least three different applications on their grid now, and they will undoubtedly do more: they are porting current applications that need increased speed or scalability, but they are also looking at developing new applications. Their developers don't just have a new tool they can use: they also have an infrastructure that allows them to do something they've never been able to do before.

    Having a grid infrastructure in place means that developers can begin to take on assignments that seemed impossible before--analyzing more data, or deeper analysis of a particular trade. Having much, much more powerful software means that the users' lives change. Rather than analyzing hundreds of trades, they can analyze thousands. Rather than doing a trade run once a day (at night, when everyone has gone home), they can run it frequently throughout the day.

    A good grid infrastructure does more than just speed up an application. It changes the way the developers work. It changes the way the users work.

    Monday, March 06, 2006

    Snark it up

    Scoble, Carr, Winer and Searls have been snarking it up lately.

    Robert has written a hilarious XML extension that will finally allow everyone to have full control over their snark...

    Snark it up:

    To this end, I want to introduce the HyperText Snarkup Language (HTSL) which will initially be described as simply an extension of XHTML with a namespace. This will allow publishers to have full control over their snark.

    Friday, March 03, 2006

    How's the generator business?

    Nicholas Carr is vigorously defending his stance (Is the server market doomed?) in his latest post (More thoughts on servers).

    I think he's still missing the boat.

    In response to some counterarguments from Charles Zedlewski and John Clingan, Carr says:

    Both argue that if servers become more efficient (through virtualization, for instance), then companies will tend to buy more of them, not fewer. If a product becomes more valuable, after all, you'll want more of it. That's a great point (for unit sales, if not for revenues), though I'm not sure it applies in this case. It's important to remember that what's really being consumed is computing cycles, not servers; through consolidation and virtualization companies may both consume a lot more cycles and buy a lot fewer boxes.

    I don't think Carr gets the point at all. He makes a great realization ("what's really being consumed is computing cycles"), but he doesn't really follow through with the thought.

    The history of computing has shown very, very consistently: the consumption of compute cycles is on an ever-increasing path. Why? Because the faster computers get, the more uses people find for them. Carr seems to be implying that we have finally reached a point that our servers are doing everything they could possibly do: from here on out, making them faster will just diminish the number of servers we need.

    He writes "companies may both consume a lot more cycles and buy a lot fewer boxes," but his argument sounds more like "if the number of cycles they need doesn't increase too much, they'll be able to buy fewer boxes." That would doom the server market indeed.

    But that's not what happens with computers. Computers get faster. With each increase in speed, incredible new uses are found. They tax the machines. They need faster ones. Repeat.

    Virtualization is another great use for machines--but it won't keep software developers from innovating, and it won't keep companies from inventing new, faster servers. Those things will continue to happen.

    Moreover, Carr keeps blurring utility computing into his argument. Quoting Frank Sommers, he says:
    And with standard application interfaces, such as J2EE, shouldn't a company's IT department be able to deploy an enterprise app into a remote data center's hosting environment?
    Ah, there's the rub. There is no one standard. As I pointed out earlier, computes are not all alike.

    While I believe strongly that there is a market for selling computes and for data centers, there is no way that will doom the server market.

    And one last point to show that, while electricity is not computes, even the electricity analogy doesn't spell doom for the server companies. There was tremendous consolidation in the electric power industry when the idea of a "power plant" came about. But did that kill the industry that manufactures generators? No--there are still companies making billions of dollars manufacturing power generation equipment (I used to work for one of them). There is still tons of research going into ways to make power better.

    As I said before: the server industry isn't doomed. It's evolving.

    Because 1.21 gigaflops just aren't 1.21 gigawatts

    1.21 Gigawatts?  What was I thinking?

    In his normally insightful blog this week, Nicholas Carr made a rather off-the-wall suggestion. He posits that the server industry is doomed, and as proof he writes about trends he sees happening: shifts away from high-end servers toward either blades or grids of commodity, off-the-shelf hardware (COTS).

    He cites two examples: Sumitomo Mitsui Bank replaced 149 traditional servers with 14 blade servers, and Google runs all of its software on machines it assembles itself.

    Of course, I don't think these two examples prove much at all. After all, the blade systems he refers to need to be supplied by somebody--and I think we will see the traditional server manufacturers continuing to move in that direction. Blades won't kill the server market; they'll be part of it.

    As for Google ("It buys cheap, commodity components and assembles them itself into vast clusters of computers"): not every company is Google--we can't all buy so many machines that no one even notices when one dies. We can't all have people on staff to build our own machines, then spend their days roaming our aisles of racks pulling out the dead ones. Many of us depend on the quality that the server manufacturers deliver. Google isn't a typical company, or even a typical web company; one might say they're unique. So I don't think their lack of name-brand servers is a harbinger of doom.

    But Carr gets way off-track when he then suggests that utility computing will kill the server industry:

    If large, expert-run utility grids supplant subscale corporate data centers as the engines of computing, the need to buy branded servers would evaporate. The highly sophisticated engineers who build and operate the grids would, like Google's engineers, simply buy cheap subcomponents and use sophisticated software to tie them all together into large-scale computing powerplants.
    I've seen many references to utility computing before, and I just don't buy it.
    Partly, it's just physics. All electrons look alike (let's not get into electron spin here: as far as my appliances are concerned, every electron looks the same). It doesn't matter to me if the power that's lighting up my life, running my refrigerator, and powering my PC came from a wind farm, a hydroelectric plant, or a diesel turbine. Well, for environmental reasons, I might prefer the former two, but the point is that when an electron gets to me, I can't tell where it came from.

    Computes just aren't the same. Computes look different on different operating systems. Not all software runs on all operating systems. Different people prefer different toolsets, and they always will. Some OSs are better for some things than others, and people choose the appropriate OSs for them. Yes, we've all read about "write once, run everywhere" software--but a small minority of software actually runs that way. OSs are different, and they will continue to be different. People will continue to write software that takes advantage of particular OSs.

    Not all compute problems can be "shipped out" easily. There are huge data concerns. First of all, there are the ubiquitous privacy and security issues: some data people just don't want leaving their building.

    Beyond that, though, there's the issue of data size and compute-to-byte ratio. If I need to do a quick search of a huge dataset I just collected from my genome lab or my jet design wind tunnel, it may not make sense to move that to a "computing powerplant." Heck, I may be collecting tons of data in real time, and I need to analyze it in real time. I need my computes where my data is. As Jim Gray says, "Put the computation near the data." If your data are with you, that means your computes are with you as well.
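To make the compute-to-byte point concrete, here's a back-of-the-envelope sketch of the ship-vs-stay decision. All of the numbers are hypothetical illustrations, not measurements:

```python
# Back-of-envelope test of "put the computation near the data":
# shipping a dataset to a remote compute utility only pays off when
# the compute time saved exceeds the transfer time. All numbers here
# are hypothetical.

def ship_or_stay(data_bytes, link_bytes_per_sec, local_secs, remote_secs):
    """Return 'ship' if transfer plus remote compute beats local compute."""
    transfer_secs = data_bytes / link_bytes_per_sec
    return "ship" if transfer_secs + remote_secs < local_secs else "stay"

# 1 TB of wind-tunnel data over a 100 Mbit/s (~12.5 MB/s) link:
# the transfer alone takes roughly 22 hours, so even a free, instant
# remote grid loses to an hour of local computing.
print(ship_or_stay(1e12, 12.5e6, local_secs=3600, remote_secs=0))
```

Flip the ratio--a small input driving hours of computation--and shipping wins, which is exactly why some workloads suit a utility and others never will.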

    Don't get me wrong: I'm a big believer in distributed computing, and I'm a big believer in grid computing. But I don't think that, in the future, I'm going to flip on the "compute switch" the way I flip on a light switch today.

    Is the server market changing? Of course it is. Blades, virtualization, distributed computing: these are all changing the needs of the market. There will continue to be a high end market. There will continue to be a low end market. But utility computing will not kill servers.

    1.21 gigawatts? What was I thinking?

    Wednesday, March 01, 2006

    New SaaS book coming

    Scanning Robert's Expert Texture link blog, I see a link to Fred Chong over at MSDN. Fred is a solutions architect at Microsoft, and he's been thinking a lot about SaaS.

    In his latest post (SaaS is a journey, walk with us), he gives the table of contents of a book that he and Gianpaolo are writing about SaaS.

    The outline looks great; his summary of key points for architects is perfect:

  • Scale the application

  • Enable multi-tenant data

  • Facilitate customization

    At my previous startup, Energy Interactive, we offered a product in the electric industry as an ASP (that's what we called SaaS back in the twentieth century). We went through everything Fred describes--only we weren't lucky enough to have a book to help us along!
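Each of those three points is a deep topic. The multi-tenant one, for instance, boils down to every row of data being scoped to one customer, with the filter applied somewhere no caller can forget it. A minimal, hypothetical sketch of the idea (schema and names are mine, not from Fred's book):

```python
# Minimal illustration of tenant-scoped data access in a multi-tenant
# SaaS app: every row carries a tenant_id, and every query filters on
# the tenant of the authenticated caller. Entirely hypothetical schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("acme", 100.0), ("acme", 250.0), ("globex", 75.0)])

def invoices_for(tenant_id):
    # The tenant filter lives in one place, so no caller can omit it.
    rows = db.execute("SELECT amount FROM invoices WHERE tenant_id = ?",
                      (tenant_id,))
    return [amount for (amount,) in rows]

print(invoices_for("acme"))  # each tenant sees only its own rows
```

The alternative designs (database-per-tenant, schema-per-tenant) trade isolation against operational cost--exactly the kind of decision I'd expect the book to weigh.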

    The issues we had with scaling are part of what convinced us that we needed to start Digipede--it just wasn't easy to scale an application. When we looked around our datacenter and saw so many servers that could have been used (and even more when we looked around the rest of our enterprise), we wanted an easy way to take an application and scale it out. The tools just weren't out there.

    One nit with the table of contents: Fred's chapter on scaling doesn't seem to address distributed or grid computing at all. I'm a little surprised by that. Given the adaptability of grid to service oriented architecture (and I certainly view SaaS as a flavor of SOA) that has been noted lately by experts like Lee Liming and Greg Nawrocki, it seems that Fred and Gianpaolo would mention it. Fred lists the following issues:
  • Pools: thread, connections etc.

  • Async

  • Locks

  • States

  • UI/Presentation

    How can you scale that application to a cluster? To a data center? How do you plan your hardware so you can handle peak demand without overspending? Whether grid is the answer or not, I'm certain that much SaaS will involve some flavor of distributed computing, and I hope these two go into some detail there.
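Those questions are exactly where statelessness pays off: a handler that takes everything it needs as arguments and holds no shared state (no locks, no session affinity) can be spread across local processes today and across a cluster tomorrow. A hypothetical Python sketch of the shape, with a stand-in workload:

```python
# Sketch of the scale-out idea behind the questions above: keep the
# request handler stateless, then let a pool (or a grid scheduler)
# decide where each call runs. The workload here is a stand-in.
from concurrent.futures import ProcessPoolExecutor

def handle_request(order_id):
    """A stateless handler: everything it needs arrives in its arguments."""
    return order_id, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(handle_request, range(8)))
    print(len(results))
```

Swapping the local pool for a distributed scheduler changes where the calls land, not how the handler is written--which is the property that makes capacity planning a deployment question instead of a rewrite.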

    By the way, it looks like Fred and Gianpaolo are going to cover a lot more than just technical issues--they've got chapters planned on everything from Business Model to Security to Instrumentation and Monitoring. That's fantastic. SaaS is so different in so many ways from traditional enterprise software, and I'm glad to see Microsoft folks lending their considerable wisdom and experience to help people along.