there are so many cores


Big compute on small iron

The needs of the many applications outweigh the needs of the few platforms, or the one. –Spock

Applications should not serve the needs of platforms. Platforms should serve the needs of applications. –V

I should support Java. C/C++ and Java cover the major app platforms. For many applications, Java is close to C/C++ in performance. (By close, I mean C/C++ is not usually 10x faster; it might be 2x faster.)

The odd man out is Python. What has held scripting languages back is that they are too slow (like 100x slower than C/C++). Python JIT technology could change this. However, it is very new; it will be years before it approaches the maturity of HotSpot.

One reason people write JVM-based languages like Scala is to get the HotSpot JIT for free. The language's runtime is written in Java, so the new language platform is actually just a Java application and benefits from HotSpot like any other.
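To make that concrete, here is a minimal sketch (the class and opcode names are my own invention, not any real language runtime): a toy bytecode interpreter written in Java. From HotSpot's perspective its dispatch loop is ordinary Java code, so once the loop gets hot it is JIT-compiled to native code, and the hosted language inherits that speedup without its author writing a JIT.

```java
// Toy stack-machine interpreter. Because it is plain Java, HotSpot
// JIT-compiles the hot dispatch loop like any other Java method --
// the "new language" gets the JIT for free.
// (Illustrative sketch; the opcodes are hypothetical.)
public class ToyVM {
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static long run(int[] code) {
        long[] stack = new long[64];
        int sp = 0, pc = 0;
        while (true) {
            switch (code[pc++]) {
                case PUSH: stack[sp++] = code[pc++];               break;
                case ADD:  stack[sp - 2] += stack[sp - 1]; sp--;   break;
                case MUL:  stack[sp - 2] *= stack[sp - 1]; sp--;   break;
                case HALT: return stack[sp - 1];
            }
        }
    }

    public static void main(String[] args) {
        // Program computes (2 + 3) * 4
        int[] prog = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(prog)); // prints 20
    }
}
```

Run this under a loop with millions of iterations and HotSpot will compile `run` to machine code; that, in miniature, is the advantage every JVM-hosted language enjoys.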

I believe that HotSpot has turned out to be a key strategic reason Java is not going away. Writing JITs is very difficult. That is changing with LLVM: what I am doing today would be impossible without it, and in a similar way it makes life easier for JIT designers now. Still, Sun poured years of work into HotSpot, and it now works very well.

A guess – last I saw, HotSpot did not perform register optimizations. It does not vectorize. If processor architecture changes enough so there are 128 cores on a CPU with an attached GPU on the same die, then I could see other platforms displacing Java.

I was at the Legion of Honor yesterday. There was the head of Medusa on a pedestal. A big crowd gathered around it. One young woman had an iPad which she used like a camera. She may have been using it as a map too.

As others have observed, the future of digital cameras is mostly devices like this. With enough computational power, an iPad could show a “heads up display” and augment reality in a compelling enough way to be useful. I’m imagining something that is dynamically doing HDR and photo-stitching a bundle-adjusted scene geometry (ok, I know that sounds crazy today). That would take a massive amount of power but be so cool.

At the last meetup, Robert Rossi observed that he was trying to invert the use of GPUs. The GPU takes geometry and renders a scene image. But you can also solve the inverse problem and use the GPU to take a scene image and calculate geometry. That is computer vision.

So my next evolutionary step is becoming a roadmap. The virtual machine needs to be configurable. Why? To support the needs of applications, some of which will be written in languages other than C/C++.

Does this seem more like the sort of thing with speculative value? It ties into a clear future trend in hardware. Big iron is becoming the cloud. Like the mainframe or supercomputers, it will always be there. But the market is comparatively small (even though that big iron is critical to our society). The growth is in smaller devices, not just phones and tablets but also netbooks, ultraportables and laptops. Power efficiency is important. So is graphics (as almost all notebook screens are 16:9, they are really optimized for watching HD movies and playing video games).

By the way, I’m pretty close to an alpha release. I still want another round of testing first. I’m kind of amazed how many little things I’ve caught over the last few days, not so much bugs in code but usability issues. I don’t want to release something with clearly poor fit and finish.

