Sunday, December 31, 2017

A fast React Live Editing implementation




Hi folks, part of my personal mission to integrate good ol' Java with modern JavaScript frameworks was the implementation of "Hot Reload". I am not talking about "automatic browser refresh" but about live-patching the code of a React client without losing component state.

This blog has moved, continue reading here

Tuesday, August 15, 2017

Developing a React+JSX app from Java without node, webpack, and babel



Implementing a web app server-side using Java has been getting ugly recently, as it requires running a full node.js stack including a load of npm packages and JavaScript-related tooling. By implementing bundling and JSX transpilation in native Java, kontraktor can help out and make things simple again (#MTSA).

you can't have a Java article without picturing a cup of coffee ..
Gone are the days of server-side web apps, though there are apparently still people out there trying to get something fancy out of .. [this blog has moved, continue reading here]


Saturday, February 25, 2017

Move Code Not Data: 'Sandboxing' using Remote Lambda Execution


Leveraging remote lambda execution to avoid being IO-bound in a data-intensive microservice system.


This blog has moved. Read the full article here.

Tuesday, November 29, 2016

Practice Report: A webapp with Polymer.js




Part two of a series of our journey building juptr.io:
Our experiences building a front end with google's Polymer.js

Why Polymer ?
There are numerous alternatives when choosing a framework to build web front ends, some of the most popular probably being ... [this blog has moved, continue reading here]

Saturday, November 12, 2016

Make distributed programming great again: Microservices with Distributed Actor Systems

Part one of a series of our journey building juptr.io: 
How a fundamental abstraction streamlines, simplifies and speeds up developing a web scale system


goes higher, faster and avoids dirty feet :)
Juptr.io is a content personalization, sharing and authoring platform. We crawl tens of thousands of blogs and media sites (German + English), then classify the documents to enable personalized content curation, consumption & discussion.
This requires
  • to crawl, store, query, analyze and classify millions of documents
  • horizontal scalability
  • a failure resistant distributed (micro)services architecture
  • Java/JavaScript interoperability
Compared to ordinary enterprise-level projects, a startup has much tighter time and budget constraints (aka startup in Europe), so reducing development effort was a major priority.
We decided to use the following foundation stack .. [this blog has moved, continue reading here]

Wednesday, October 19, 2016

Word disambiguation combining LDA and word embeddings
Topical search tries to take into account the occurrence of search-term-related words to avoid matching irrelevant documents. This post outlines how LDA can be used to disambiguate a word set obtained using word embeddings.

Texting Bots: The command line interface rebranded?


Texting bots and chat bots are all the hype, but is typing to a mostly dumb AI the way users want to interact with software? Maybe other aspects besides AI and texting are the real killer feature of chat bots.

Friday, November 20, 2015

Clarification and Fix for Serialization Exploit

There have been reports of a Java serialization based exploit (e.g. here), enabling the attacker to execute arbitrary code. As usual, there is a lot of half-baked reasoning about this issue, so I'd like to present a fix for my fast-serialization library as well as tweaks to block deserialization of certain classes for stock JDK serialization.

In my view the issue is not located in Java serialization, but in insecure programming in some common open source libraries.

Attributing this exploit to Java serialization is wrong; the exploit requires door-opening non-JDK code to be present on the server.

Unfortunately some libraries accidentally do this, however it's very unlikely this is a widespread (mis-)programming pattern.


How can a class on the server side open the door?


  • the class has to define readObject()
  • the readObject() method has to be implemented in a way that allows executing code sent from remote

I struggle to imagine a sane use case where one would do something along the lines of

    readObject(ObjectInputStream in) { Runtime.getRuntime().exec(in.readUTF()); }

or

    readObject(ObjectInputStream in) { define_and_execute_class(in.readBytes()); }

Anyway, it happened. Again: something like the code above must be present on the classpath at server side, it cannot be injected from outside.



Fixing this for fast-serialization


  • fast-serialization 2.43 introduces a delegate interface, which enables blacklisting/whitelisting of classes, packages or whole package trees. By narrowing down the classes allowed for deserialization, one can ensure only expected objects are deserialized.



  • if you cannot upgrade to fst 2.43 for backward compatibility reasons, at least register a custom serializer for the problematic classes (see exploit details). As registered custom serializers are processed by fast-serialization before falling back to JDK serialization emulation, throwing an exception in the read or instantiate method of the custom serializer will effectively block the exploit.
  • There is no significant performance impact, as this is executed at initialization only.



Hotfixing stock JDK Serialization implementation

(Verified for JDK 1.8 only)

Use a subclass of ObjectInputStream instead of the original implementation and put in a blacklisting/whitelisting by overriding this method in ObjectInputStream:
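A minimal sketch of the idea, overriding resolveClass, the hook ObjectInputStream calls for every class encountered in the stream (the whitelist entries are hypothetical, adapt them to your application):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InvalidClassException;
    import java.io.ObjectInputStream;
    import java.io.ObjectStreamClass;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class WhitelistingObjectInputStream extends ObjectInputStream {

        // classes expected on this stream - hypothetical entries
        private static final Set<String> WHITELIST = new HashSet<>(Arrays.asList(
            "java.lang.String", "java.util.ArrayList", "com.myapp.SomeDTO"
        ));

        public WhitelistingObjectInputStream(InputStream in) throws IOException {
            super(in);
        }

        @Override
        protected Class<?> resolveClass(ObjectStreamClass desc)
                throws IOException, ClassNotFoundException {
            // invoked once per incoming class, before any instantiation happens
            if (!WHITELIST.contains(desc.getName()))
                throw new InvalidClassException(desc.getName(), "not whitelisted for deserialization");
            return super.resolveClass(desc);
        }
    }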



(Maybe also check resolveProxyClass; I'm not sure whether this can get exploited as well [probably not].)
This will have a performance impact, so it's crucial to have an efficient implementation of blacklisting/whitelisting here.




Friday, July 31, 2015

Polymer WebComponents Client served by Java Actors

Front end development productivity has been stalling for a long time. I'd even say that, regarding productivity, there has been no substantial improvement since the days of Smalltalk (bubbling event model, MVC done right, component-based).

Weirdly enough, for a long time things got worse: document-centric web technology made classic and proven component-centric design patterns hard to implement.

For reasons unknown to me (fill in some major raging), it took some 15 years until a decent and well-thought-out web component standard (proposal) arose: WebComponents. It's kind of an umbrella of several standards, most importantly:
  • Html Templates - the ability to have markup snippets which are not "seen" and processed by the browser but can be "instantiated" programmatically later on.
  • Html Imports - finally the ability to include a bundle of css, html and javascript, implicitly resolving dependencies by not loading an import twice. Style does not leak out to the main document.
  • Shadow Dom
It would not be a real web technology without some serious drawbacks: Html Imports are handled by the browser, so you'll get many http requests when loading a webcomponent app using html imports to resolve dependencies.
As the Html standard is strictly separated from the Http protocol, simple solutions such as server side imports (to reduce number of http requests) can never be part of an official standard.

Anyway, with Http-2's pipelining and multiplexing features, html-imports won't be an issue within some 3 to 5 years (..); until then some workarounds and shims are required.

Github repo for this blog "Buzzword Cloud": kontraktor-polymer-example

Live Demo: http://46.4.83.116/polymer/ (just sign in with anything except user 'admin'). Requires WebSocket connectivity and a modern browser (uses webcomponents-lite.js, so no IE). Curious whether it survives ;-).

Currently Chrome and Opera implement WebComponents, however there is a well-working polyfill for Mozilla + Safari and a last-resort polyfill for IE.
Using a server-side html-import shim (as provided by kontraktor's http4k), web component based apps run on Android 4.x and iOS Safari.

(Polymer) WebComponents in short



a web component with dependencies (link rel='import'), embedded style, a template and code


Dependency Management with Imports ( "<link rel='import' ..>" )

The browser does not load an imported document twice and evaluates html imports linearly. This way dependencies are resolved in correct order implicitly. An imported html snippet may contain (nearly) arbitrary html such as scripts, css, .. most often it will contain the definition of a web component.

Encapsulation

Styles and templates defined inside an imported html component do not "leak" to the containing document.
Web components support data binding (one/two way). Typically a Web component coordinates its direct children only (encapsulation). Template elements can be easily accessed with "this.$.elemId".

Application structure

An application also consists of components. Starting from the main-app component, one subdivides an app hierarchically into smaller subcomponents, which has a nice self-structuring effect, as one creates reusable visual components along the way.

The main index.html typically refers to the main app component only (kr-login is an overlay component handling authentication with kontraktor webapp server):



That's nice and pretty straightforward .. but let's have a look at what my simple sample application's footprint looks like:

  Web Reality:


hm .. you might imagine how such an app will load on a mobile device, as the number of concurrent connections is typically limited to 2..6 and a request latency of 200 to 500ms isn't uncommon. As bandwidth increases continuously but latency roughly stays the same, reducing the number of requests pays off for many apps, even at the cost of increased initial bandwidth.

In order to reduce the number of requests and bandwidth usage, the JavaScript community maintains a load of tools for minifying, aggregating and preprocessing resources. When using Java at the server side, one ends up having two build systems: gradle or maven for your Java stuff, node.js based JavaScript tools like bower, grunt, vulcanize etc. In addition, a lot of temporary (redundant) artifacts are created this way.

As the Java web server landscape mostly sticks to server-centric "Look-'ma-we-can-do-ze-web™" applications, it's hardly possible to make use of modern JavaScript frameworks with Java at the server side, especially as REAL JAVA DEVELOPERS DO NOT SIMPLY INSTALL NODE.JS (though they have a Windows installer now ;) ..). Nashorn unfortunately isn't there yet; currently it fails to replace node.js due to missing APIs or detail incompatibilities.





I think it's time for an unobtrusive pointer to my kontraktor actor library, which abstracts away most of the messy stuff, allowing for direct communication of JavaScript and Java actors via http or websockets.



Even when serving single page applications, there is stuff only a web server can implement effectively:

  • dynamically inline Html-Import's 
  • dynamically minify scripts (even inline javascript)

In essence, inlining, minification and compression are temporary wire-level optimizations; no need to add them to the build and have your project cluttered up. Kontraktor's Http4k optimizes served content dynamically if run in production mode.
The same application with (kontraktor-)Http4k in production mode:


Even better: as imports are removed during server-side inlining, the web app runs under iOS Safari and the Android 4.4.2 default browser.

Actor based Server side

Whether it's a TCP server socket accepting client(-socket)s or a client browser connecting to a web server: it's structurally the same pattern:

1. server listens for authentication/connection requests.
2. server authenticates/handles a newly connecting client and instantiates/opens a client connection (webapp terminology: session).

What's different when comparing a TCP server and a WebApp Http server is
  • underlying transport protocol
  • message encoding
As kontraktor maps actors to a concrete transport topology at runtime, a web app server application does not look special:


a generic server using actors

The decision on transport and encoding is done by publishing the server actor. The underlying framework (kontraktor) then maps the high-level behaviour description to a concrete transport and encoding. E.g. for websockets + http long-poll transports it would look like:
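As an illustration, a minimal sketch of such a server actor (kontraktor-style; the Promise/AsActor usage is from memory, and the publisher names in the comments are assumptions, not verbatim kontraktor API):

    import org.nustaq.kontraktor.Actor;
    import org.nustaq.kontraktor.Actors;
    import org.nustaq.kontraktor.IPromise;
    import org.nustaq.kontraktor.Promise;

    // a generic server actor: plain async methods, no transport-specific code
    public class MyWebApp extends Actor<MyWebApp> {

        public IPromise<String> login(String user, String pwd) {
            if ("admin".equals(user))
                return new Promise<>(null, "not allowed"); // resolve with an error
            return new Promise<>("welcome " + user);       // resolve with a result
        }

        public static void main(String[] args) {
            MyWebApp app = Actors.AsActor(MyWebApp.class);
            // publishing maps the actor onto concrete transports + encodings;
            // the publisher API names below are assumptions (from memory), e.g.:
            // new WebSocketPublisher(app, "/ws", 8080).publish();
            // new HttpLongPollPublisher(app, "/api", 8080).publish();
        }
    }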


On the client side, js4k.js implements the API required to talk to Java actors (using a reduced tell/ask-style API).

So with a different transport mapping, a webapp backend might be used as an in-process or tcp-connectable service.

So far so good, however a webapp needs html-import inlining, minification and file serving ...
At this point there is an end to the abstraction game, so kontraktor simply leverages the flexibility of RedHat's Undertow by providing a "resource path" FileResource handler.


The resource path file handler parses .html files and inlines + minifies (depending on settings) html-imports, css links and script tags. Of course, for development this should be turned off.
The resource path works like Java's classpath: a referenced file is looked up starting with the first path entry, advancing along the resource path until a match is found. This can be nice in order to quickly switch versions of components used and keep your libs organized (no copying required).

As html is fully parsed by Http4k (JSoup parser ftw), it's recommended to keep your stuff syntactically correct. In addition, keep in mind that actors require a non-blocking + async implementation of server-side logic, so have your blocking database calls "outsourced" to kontraktor's thread pool like:
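A sketch of that idiom, assuming kontraktor's Actor base class provides an exec()-style helper (name from memory) that runs a callable on a worker pool and returns a promise; the JDBC lookup is made up:

    import org.nustaq.kontraktor.Actor;
    import org.nustaq.kontraktor.IPromise;
    import org.nustaq.kontraktor.Promise;

    public class UserStore extends Actor<UserStore> {

        public IPromise<String> loadUserName(String userId) {
            Promise<String> result = new Promise<>();
            exec(() -> blockingJdbcLookup(userId))             // offloaded, actor thread stays free
                .then((name, error) -> result.complete(name, error));
            return result;
        }

        private String blockingJdbcLookup(String userId) {     // hypothetical blocking DB call
            return "name-of-" + userId;
        }
    }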



Conclusion:
  • javascript frameworks + web standards keep improving. A rich landscape of libraries and ready-to-use components has grown.
  • they increasingly come with node.js based tooling
  • JVM based non-blocking servers scale better and have a much bigger pool of server side software components
  • kontraktor http4k + js4k.js helps close the gap and simplifies development by optimizing webapp file structure and size dynamically and by abstracting (not annotating!) away irrelevant details and enterprisey ceremony.

Saturday, June 27, 2015

Don't REST! Revisiting RPC performance and where Fowler might have been wrong ..

[Edit: The title is clickbait of course; Fowler is aware of the async/sync issues and recently posted http://martinfowler.com/articles/microservice-trade-offs.html with a clarifying section regarding async.]

Hello my dear citizens of planet earth ...

There are many good reasons to decompose large software systems into decoupled message passing components (team size + decoupling, partial + continuous software delivery, high availability, flexible scaling + deployment architecture, ...).

With distributed applications comes the need for ordered point-to-point message passing. This is different from client/server relations, where many clients send requests at a low rate and the server can choose to scale using multiple threads processing requests concurrently.
Remote messaging performance is to distributed systems what method invocation performance is to non-distributed monolithic applications.
(Guess what is one of the most optimized areas in the JVM: method invocation.)

[Edit: with "REST", I also refer to HTTP based webservice style API, this somewhat imprecise]

Revisiting high level remoting Abstractions

There have been various attempts at building high-level, location-transparent abstractions (e.g. Corba, distributed objects), however in general those ideas have not received broad acceptance.

This article by Martin Fowler sums up common sense pretty well:

http://martinfowler.com/articles/distributed-objects-microservices.html

Though not explicitly written, the article implies synchronous remote calls, where a sender blocks and waits for a remote result to arrive, thereby including the cost of a full network roundtrip for each remote call performed.

With asynchronous remote calls, many of the complaints no longer hold true. When using asynchronous message passing, the granularity of remote calls is not significant anymore.

"course grained" processing

remote.getAgeAndBirth().then( (age,birth) -> .. );

is not significantly faster than 2 "fine-grained" calls

all( remote.getAge(), remote.getBirth() ) 
   .then( resultArray -> ... ); 

as both variants include network round trip latency only once.

On the other hand, with synchronous remote calls every single remote method call has a penalty of one network round trip, and only then do Fowler's arguments hold.

Another element changing the picture is the availability of "Spores": a spore is a snippet of code which can be passed over the network and executed at the receiver side, e.g.

remote.doWithPerson( "Heinz", heinz -> {
    // executed remotely, stream data back to callee
    stream( heinz.salaries().sum() / 12 ); finish();
}).then( averageSalary -> .. );

Spores are implementable efficiently given the availability of VMs and JIT compilation.

Actor systems and similar asynchronous message passing approaches have gained popularity in recent years. The main motivation was to ease concurrency, plus the insight that multithreading with shared data does not scale well and is hard to master in an industrial grade software development environment.

As large servers in essence are "distributed systems in a box", those approaches apply to distributed systems as well.

In the following I'll test the remote invocation performance of some frameworks. I'd like to prove that established frameworks are far from what is technically possible, and to show that popular technology choices such as REST are fundamentally unsuited to form the foundation of large, fine-grained distributed applications.


Test Participants

Disclaimer: As the tested products are of medium to high complexity, there is a danger of misconfiguration or test errors, so if anybody has a (verified) improvement to one of the testcases, just drop me a comment or file an issue to the github repository containing the tests:

https://github.com/RuedigerMoeller/remoting-benchmarks.

I verified by googling forums etc. that the numbers are roughly in line with what others have observed.

Features I expect from a distributed application framework:
  • Ideally fully location-transparent. At least there should be a concise way (e.g. annotations, generators) to do marshalling half-automatically.
  • It is capable of mapping responses to their appropriate request callbacks automatically (via callbacks or futures/promises or whatever).
  • it's asynchronous

products tested (Disclaimer: I am the author of kontraktor):
  • Akka 2.11
    Akka provides a high level programming interface, marshalling and networking details are mostly invisible to application code (full location transparency).
  • Vert.x 3.1
    provides a weaker level of abstraction compared to actor systems, e.g. there are no remote references. Vert.x has a symbolic notion of network communication (event bus, endpoints).
    As it's "polyglot", marshalling and message encoding need some manual support.
    Vert.x is kind of a platform and addresses many practical aspects of distributed applications such as application deployment, integration of popular technology stacks, monitoring, etc.
  • REST (RestExpress)
    As Http 1.1 based REST is limited by latency (synchronous protocol), I just chose this one more or less randomly.
  • Kontraktor 3, distributed actor system on Java 8. I believe it hits a sweet spot regarding performance, ease of use and mental-model complexity. Kontraktor provides a concise, mostly location-transparent high-level programming model (Promises, Streams, Spores) supporting many transports (tcp, http long-poll, websockets).

Libraries skipped:
  • finagle - requires me to clone and build their fork of thrift 0.5 first. Then I'd have to define thrift messages, then generate, then finally run it.
  • parallel universe - at the time of writing, the actor remoting was not in a testable state ("Galaxy" is alpha); the examples come without build files and the gradle build did not work. Once I managed to build it, the programs were expecting configuration files which I could not find. Maybe worth a revisit (accepting pull requests as well :) ).

The Test

I took a standard remoting example:
The "Ask" testcase:
The sender sends a message of two numbers, the remote receiver answers with the sum of those two numbers. The remoting layer has to track and match requests and responses, as there can be tens of thousands "in-flight".
The "Tell" testcase:
The sender sends fire-and-forget; no reply is sent by the receiver.
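For illustration, the two testcases expressed as actor methods (a kontraktor-style sketch; the method names are mine):

    import org.nustaq.kontraktor.Actor;
    import org.nustaq.kontraktor.IPromise;
    import org.nustaq.kontraktor.Promise;

    public class SumService extends Actor<SumService> {

        // "tell": fire-and-forget, nothing flows back to the sender
        public void tellSum(int a, int b) {
            int sum = a + b; // consumed locally
        }

        // "ask": the remoting layer must route the promise resolution
        // back to the matching in-flight request on the sender side
        public IPromise<Integer> askSum(int a, int b) {
            return new Promise<>(a + b);
        }
    }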

Results

Attention: Don't miss notes below charts.

Platform: Linux Centos 7, dual socket, 20 real cores @ 2.5 GHz, 64GB RAM. As the tests are ordered point-to-point, none of the tests made use of more than 4 cores.

                        tell Sum (msg/second)    ask Sum (msg/second)
Kontraktor Idiomatic    1.900.000                860.000
Kontraktor Sum-Object   1.450.000                795.000
Vert.x 3                200.000                  200.000
AKKA (Kryo)             120.000                  65.000
AKKA                    70.000                   64.500
RestExpress             15.000                   15.000
REST >15 connections    48.000                   48.000

let me chart that for you ..



Remarks:

  • Kontraktor 3 outperforms by a huge margin. I verified the test is correct and all messages are transmitted (if in doubt, just clone the git repo and reproduce).
  • Vert.x 3 seems to have a built-in rate limiting. I saw peaks of 400k messages/second, however it averaged at 200k (hints for improvement welcome). In addition, the first connecting sender only gets 15k/second throughput; if I stop and reconnect, throughput is as charted.
    I tested the very first Vert.x 3 final release. For marshalling, fast-serialization (FST) was used (also used in kontraktor). Will update as Vert.x 3 matures.
  • Akka. I spent quite some time on improving the performance, with mediocre results. As Kryo is roughly of the same speed as fst serialization, I'd expect at least 50% of kontraktor's performance.

    Edit:
    Further analysis shows Akka is hit by poor serialization performance. It has an option to use Protobuf for encoding, which might improve results (but why did Kryo not help then?).

    Implications of using protobuf:
    * each message needs to be defined in a .proto file and a generator needs to be run
    * frequently additional data transfer is done like "app data => generated messages => app data"
    * no reference sharing support, no cyclic object graphs can be transmitted
    * no implicit compression by serialization's reference sharing
    * unsure whether the ask() test profits, as it did not profit from Kryo either
    * Kryo performance is in the same ballpark as protobuf but did not help that much either

    Smell: I had several people contacting me aiming to improve the Akka results. They somehow disappear.

    Once I find time I might add a protobuf test. It's a pretty small test program, so if there was an easy fix, it should not be a huge effort to provide it. The git repo linked above contains a maven-buildable, ready-to-use project.
  • REST. The poor throughput is not caused by RestExpress (which I found quite straightforward to use) but by Http 1.1's dependence on latency. If one moves a server to other hardware (e.g. a different subnet, cloud), the throughput of a service can change drastically due to different latency. This might change with Http 2.
    Good news is: You can <use> </any> <chatty> { encoding: for messages }, as it won't make a big difference for point-to-point REST performance.
    Only when opening many connections (>20) concurrently does throughput increase. This messes up transaction/message ordering, so it can only be used for idempotent operations (a species mostly known from white papers and conference slides, rarely seen in the wild).


Misc Observations


Backpressure

Sending millions of messages as fast as possible can be tricky to implement in a non-blocking environment. A naive send loop
  • might block the processing thread
  • builds up a large outbound queue, as put is faster than take + sending
  • can prevent incoming callbacks from being enqueued + executed (= deadlock or OOM)
Of course this is a synthetic test case, however similar situations exist, e.g. when streaming large query results or sending large blobs to other nodes (e.g. init with reference data).

None of the libraries (except rest) handled that out of the box:
  • Kontraktor requires a manual increase of queue sizes over the default (32k) in order to not deadlock in the "ask" test. In addition, it's required to programmatically adapt the send rate using the backpressure signal raised by the TCP stack (network send blocks). This can be done non-blocking, "offer()" style (see the sketch below).
  • For Vert.x I used a periodic task sending a burst of 500 to 1000 messages. Unfortunately the optimal number of messages per burst depends on hardware performance, so the test might need adaptation when run on e.g. a laptop.
  • For Akka I sent 1 million messages every 30 seconds in order to avoid implementing application-level flow control. It just queues up messages and degrades to like 50 msg/s when used naively (big loop).
  • REST was not problematic here (synchronous Http 1.1 anyway). Degraded by default.
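For illustration, a generic sketch (plain JDK, independent of the libraries tested) of an "offer()"-style send loop that backs off and reschedules instead of blocking or queueing without bound:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class BatchedSender {

        private final BlockingQueue<Integer> outbound = new ArrayBlockingQueue<>(32_000);
        private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        // enqueue messages [from..to) without ever blocking the calling thread
        void sendAll(int from, int to) {
            int i = from;
            while (i < to && outbound.offer(i))   // offer() fails instead of blocking when full
                i++;
            if (i < to) {                         // queue full: back off, retry the rest later
                final int next = i;
                timer.schedule(() -> sendAll(next, to), 1, TimeUnit.MILLISECONDS);
            }
        }
    }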



Why is kontraktor remoting that much faster?
  • premature optimization 
  • adaptive batching works wonders, especially when applied to reference sharing serialization.
  • small performance compromises stack up, reduce them bottom up 
Kontraktor actually is far from optimal. It still generates and serializes a "MethodCall() { int targetId, [..], String methodName, Object args[] }" for each message remoted. It does not use Externalizable or other ways of bypassing generic (fast-)serialization.
Throughputs beyond 10 million remote method invocations/sec have been proven possible, at the cost of a certain fragility + complexity (unique ids and distributed systems ...) + manual marshalling optimizations.


Conclusion

  • As scepticism regarding distributed object abstractions is mostly performance-related, high-performance asynchronous remote invocation is a game changer
  • Popular libraries have room for improvement in this area
  • Don't use REST/Http for inter-system connects in (micro)service-oriented architectures: point-to-point performance is horrible. It has its applications in the area of (WAN) web services, platform-neutral, easily accessible APIs and client/server patterns.
  • Asynchronous programming is different and requires different/new solution patterns (at source code level). It is unavoidable to learn the use of asynchronous messaging primitives.
    "Pseudo-synchronous" approaches (e.g. fibers) are good in order to better scale multithreading, but do not work out for distributed systems.
  • lack of craftsmanship can kill visions.


Monday, December 22, 2014

A persistent KeyValue Server in 40 lines and a sad fact

This post originally was submitted to the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

It also has been published on voxxed.com.

Picking up Peter's well-written overview of the uses of Unsafe, I'll have a short fly-by on how low-level techniques in Java can save development effort by enabling a higher level of abstraction, or allow for Java performance levels probably unknown to many.

My major point is to show that conversion of Objects to bytes and vice versa is an important fundamental, affecting virtually any modern java application.

Hardware enjoys processing streams of bytes, not object graphs connected by pointers, as "All memory is tape" (M. Thompson, if I remember correctly ..).

Many basic technologies are therefore hard to use with vanilla Java heap objects:
  • Memory mapped files - a great and simple technology to persist application data safely, fast & easily.
  • Network communication is based on sending packets of bytes
  • Interprocess communication (shared memory)
  • Large main memory of today's servers (64GB to 256GB). (GC issues)
  • CPU caches work best on data stored as a continuous stream of bytes in memory
so use of the Unsafe class in most cases boils down to helping transform a Java object graph into a continuous memory region and vice versa, either using
  • [performance enhanced] object serialization or
  • wrapper classes to ease access to data stored in a continuous memory region.
(Code & examples of this post can be found here)

    Serialization based Off-Heap

    Consider a retail web application where there might be millions of registered users. We are actually not interested in representing the data in a relational database, as all that's needed is a quick retrieval of user-related data once a user logs in. Additionally, one would like to traverse the social graph quickly.

    Let's take a simple user class holding some attributes and a list of 'friends' making up a social graph.
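    A minimal stand-in for such a user class (the field names are made up):

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    public class User implements Serializable {
        String userId;                              // map key
        String name;
        String email;
        List<String> friends = new ArrayList<>();   // userIds forming the social graph
    }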


    The easiest way to store this on-heap is a simple huge HashMap.
    Alternatively one can use off heap maps to store large amounts of data. An off heap map stores its keys and values inside the native heap, so garbage collection does not need to track this memory. In addition, native heap can be told to automagically get synchronized to disk (memory mapped files). This even works in case your application crashes, as the OS manages write back of changed memory regions.

    There are some open source off heap map implementations out there with various feature sets (e.g. ChronicleMap), for this example I'll use a plain and simple implementation featuring fast iteration (optional full scan search) and ease of use.

    Serialization is used to store objects, deserialization is used to pull them back onto the java heap. Pleasantly, I have written the (afaik) fastest fully JDK-compliant object serialization on the planet, so I'll make use of that.
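    A sketch of the usage, based on fst's FSTAsciiStringOffheapMap (the constructor arguments are from memory - treat them as assumptions and check the example repo):

    import org.nustaq.offheap.FSTAsciiStringOffheapMap;

    FSTAsciiStringOffheapMap<User> users =
        new FSTAsciiStringOffheapMap<>(
            "/tmp/users.mmf",            // backing memory-mapped file => persistence
            16,                          // max key length (chars)
            4L * 1024 * 1024 * 1024,     // off-heap region size (4GB)
            10_000_000                   // expected number of entries
        );

    users.put("u4711", aUser);           // serializes the object into native memory
    User u = users.get("u4711");         // deserializes it back onto the java heap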


     Done:
    • persistence by memory mapping a file (map will reload upon creation). 
    • Java Heap still empty to serve real application processing with Full GC < 100ms. 
    • Significantly less overall memory consumption. A serialized user record is ~60 bytes, so in theory 300 million records fit into 180GB of server memory. No need to raise the big data flag and run 4096 hadoop nodes on AWS ;).

    Comparing a regular in-memory java HashMap and a fast-serialization based persistent off-heap map holding 15 million user records will show the following results (on an older 3GHz XEON 2x6):

                                 HashMap         OffheapMap (serialization based)
    consumed Java Heap (MB)      6.865,00        63,00
    Full GC (s)                  26,039          0,026
    Native Heap (MB)             0               3.050
    get/put ops per s            3.800.000,00    750.000,00
    required VM size (MB)        12.000,00       500,00

    [test source / blog project] Note: You'll need at least 16GB of RAM to execute them.

    As one can see, even with fast serialization there is a heavy penalty (~factor 5) in access performance; anyway, compared to other persistence alternatives it's still superior (1-3 microseconds per "get" operation, "put()" very similar).

    Use of JDK serialization would perform at least 5 to 10 times slower (direct comparison below) and therefore render this approach useless.


    Trading performance gains against higher level of abstraction: "Serverize me"


    A single server won't be able to serve (hundreds of) thousands of users, so we somehow need to share data amongst processes, even better: across machines.

    Using a fast implementation, it's possible to generously use (fast-)serialization for over-the-network messaging. Again: if this ran like 5 to 10 times slower, it just wouldn't be viable. Alternative approaches require an order of magnitude more work to achieve similar results.

    By wrapping the persistent off-heap hash map with an Actor implementation (async ftw!), some lines of code make up a persistent KeyValue server with a TCP-based and an HTTP interface (uses kontraktor actors). Of course, the Actor can still be used in-process if one decides so later on.
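    Not the original 40 lines, but a rough sketch of the shape of such an actor (kontraktor-style; constructor arguments and details are assumptions):

    import java.io.Serializable;
    import org.nustaq.kontraktor.Actor;
    import org.nustaq.kontraktor.Actors;
    import org.nustaq.kontraktor.IPromise;
    import org.nustaq.kontraktor.Promise;
    import org.nustaq.offheap.FSTAsciiStringOffheapMap;

    public class KVServer extends Actor<KVServer> {

        FSTAsciiStringOffheapMap<Serializable> store;

        public void init(String file) {                     // async init after actor creation
            try {
                store = new FSTAsciiStringOffheapMap<>(file, 16, 4L * 1024 * 1024 * 1024, 10_000_000);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public void put(String key, Serializable value) {   // "tell" style, fire-and-forget
            store.put(key, value);
        }

        public IPromise<Serializable> get(String key) {     // "ask" style, promise flows back
            return new Promise<>(store.get(key));
        }
    }

    // usage: KVServer kv = Actors.AsActor(KVServer.class); kv.init("/tmp/kv.mmf");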


    Now that's a microservice. Given it lacks any attempt at optimization and is single-threaded, it's reasonably fast [same XEON machine as above]:
    • 280_000 successful remote lookups per second
    • 800_000 in case of failed lookups (key not found)
    • serialization based TCP interface (1-liner)
    • a stringy webservice for the REST-of-us (1-liner).
    [source: KVServer, KVClient] Note: You'll need at least 16GB of RAM to execute the test.

    A real-world implementation might want to double performance by directly putting the received serialized object byte[] into the map instead of encoding it twice (encode/decode once for transmission over the wire, then decode/encode for the off-heap map).

    "RestActorServer.Publish(..);" is a one liner to also expose the KVActor as a webservice in addition to raw tcp:




    C-like performance using flyweight wrappers / structs

    With serialization, regular Java objects are transformed to a byte sequence. One can do the opposite: create wrapper classes which read data from fixed or computed positions of an underlying byte array or native memory address. (E.g. see this blog post.)

    By moving the base pointer, it's possible to access different records by just moving the wrapper's offset. Copying such a "packed object" boils down to a memory copy. In addition, it's pretty easy to write allocation-free code this way. One downside is that reading/writing single fields has a performance penalty compared to regular Java objects. This can be made up for by using the Unsafe class.

    "flyweight" wrapper classes can be implemented manually as shown in the blog post cited, however as code grows this starts getting unmaintainable.
    Fast-serializaton provides a byproduct "struct emulation" supporting creation of flyweight wrapper classes from regular Java classes at runtime. Low level byte fiddling in application code can be avoided for the most part this way.



    How a regular Java class can be mapped to flat memory (fst-structs):
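    Roughly like this (a sketch; the fst-structs API details are from memory, treat them as assumptions):

    import org.nustaq.offheap.structs.FSTStruct;
    import org.nustaq.offheap.structs.FSTStructFactory;

    // a struct class: fields live in a flat byte region instead of on the heap
    public class Instrument extends FSTStruct {
        protected int instrumentId;
        protected long minPrice;

        public int getInstrumentId()        { return instrumentId; }
        public void setInstrumentId(int id) { instrumentId = id; }
        public long getMinPrice()           { return minPrice; }
    }

    // at runtime the factory generates a flyweight subclass doing offset arithmetic,
    // e.g. (method name from memory):
    // Instrument flat = FSTStructFactory.getInstance().toStruct(new Instrument());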


    Of course there are simpler tools out there to help reduce manual programming of encoding (e.g. Slab) which might be more appropriate for many cases and use less "magic".

    What kind of performance can be expected using the different approaches (sad fact incoming)?

    Let's take the following struct class, consisting of a price update and an embedded struct denoting a tradable instrument (e.g. a stock), and encode it using the various methods:

    a 'struct' in code

    Pure encoding performance:

    Structs          fast-Ser (no shared refs)   fast-Ser       JDK Ser (no shared)   JDK Ser
    26.315.000,00    7.757.000,00                5.102.000,00   649.000,00            644.000,00




    Real world test with messaging throughput:

    In order to get a basic estimation of the differences in a real application, I run an experiment on how different encodings perform when used to send and receive messages at a high rate via reliable UDP messaging:

    The Test:
    A sender encodes messages as fast as possible and publishes them using reliable multicast, a subscriber receives and decodes them.

    Structs          fast-Ser (no shared refs)   fast-Ser       JDK Ser (no shared)   JDK Ser
    6.644.107,00     4.385.118,00                3.615.584,00   81.582,00             79.073,00

    (Tests done on I7/Win8, XEON/Linux scores slightly higher, msg size ~70 bytes for structs, ~60 bytes serialization).

    Slowest compared to fastest: a factor of 82. The test highlights an issue not covered by micro-benchmarking: encoding and decoding should perform similarly, as factual throughput is determined by min(encoding performance, decoding performance). For unknown reasons, JDK serialization manages to encode the message tested like 500_000 times per second, but decoding performance is only 80_000 per second, so in the test the receiver gets dropped quickly:

    "
    ...
    ***** Stats for receive rate:   80351   per second *********
    ***** Stats for receive rate:   78769   per second *********
    SUB-ud4q has been dropped by PUB-9afs on service 1
    fatal, could not keep up. exiting
    "
    (Creating backpressure here probably isn't the right way to address the issue ;-)  )

    Conclusion:
    • a fast serialization allows for a level of abstraction in distributed applications that is impossible if the serialization implementation is either
      - too slow
      - incomplete, e.g. cannot handle any serializable object graph
      - requires manual coding/adaptations (would put many restrictions on actor message types, Futures, Spores; a maintenance nightmare)
    • Low-level utilities like Unsafe enable different representations of data, resulting in extraordinary throughput or guaranteed latency boundaries (allocation-free main path) for particular workloads. These are impossible to achieve by a large margin with the JDK's public tool set.
    • In distributed systems, communication performance is of fundamental importance. Removing Unsafe is not the biggest fish to fry looking at the numbers above .. JSON or XML won't fix this ;-).
    • While the HotSpot VM has reached an extraordinary level of performance and reliability, CPU is wasted in some parts of the JDK like there's no tomorrow. Given we are living in the age of distributed applications and data, moving stuff over the wire should be easy to achieve (not manually coded) and as fast as possible.

    Addendum: bounded latency

    A quick ping-pong RTT latency benchmark showing that Java can compete with C solutions easily, as long as the main path is allocation-free and techniques like those described above are employed:



    [credits: charts+measurement done with HdrHistogram]

    This is an "experiment" rather than a benchmark (so do not read: 'Proven: Java faster than C'), it shows low-level-Java can compete with C in at least this low-level domain.
    Of course its not exactly idiomatic Java code, however its still easier to handle, port and maintain compared to a JNI or pure C(++) solution. Low latency C(++) code won't be that idiomatic either ;-)

    About me: I am a solution architect freelancing at an exchange company in the area of realtime GUIs, middleware and low-latency CEP (Complex Event Processing), nightly hacking at https://github.com/RuedigerMoeller.


    Tuesday, October 28, 2014

    The Internet is running in debug mode

    With the rise of the Web, textual encodings like xml/JSON have become very popular. Of course, textual message encoding has many advantages from a developer's perspective. What most developers are not aware of is how expensive encoding and decoding of textual messages really is compared to a well-defined binary protocol.

    It's common to define a system's behaviour by its protocol. Actually, a protocol mixes up two distinct aspects of communication, namely:

    • Encoding of messages
    • Semantics and behavior (request/response, signals, state transition of communication parties ..)

    Frequently (not always), these two very distinct aspects are mixed up without need. So we are forced to run the whole internet in "debug mode", as 99% of webservice and webapp communication is done using textual protocols.

    The overhead in CPU consumption compared to a well-defined binary encoding is a factor of ~3-5 (JSON) up to >10-20 (XML). The unnecessary waste of bandwidth also adds to that greatly (yes, you can zip, but this in turn will waste even more CPU).
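    As a toy illustration of where the waste comes from, compare the same price update encoded as JSON text vs. a fixed binary layout (field names and numbers are made up):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class EncodingSize {
        public static void main(String[] args) {
            String json = "{\"symbol\":\"ACME\",\"price\":123.45,\"qty\":100}";
            byte[] text = json.getBytes(StandardCharsets.UTF_8);

            ByteBuffer bin = ByteBuffer.allocate(4 + 8 + 4);
            bin.put("ACME".getBytes(StandardCharsets.UTF_8)); // 4 byte symbol
            bin.putDouble(123.45);                            // 8 byte price
            bin.putInt(100);                                  // 4 byte quantity

            System.out.println(text.length + " bytes as JSON vs " + bin.capacity() + " bytes binary");
            // decoding the binary form is a handful of fixed-offset reads; decoding the JSON
            // means scanning, tokenizing and parsing decimal strings - that's where the CPU goes
        }
    }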

    I haven't calculated the numbers, but this is environmental pollution at big scale. Unnecessary CPU consumption to this extent wastes a lot of energy (global warming, anyone?).

    Solution is easy:
    • Standardize on some simple encodings (pure binary, self describing binary ("binary json"), textual)
    • Define the behavioral part of a protocol (exclude encoding)
    • use textual encoding during development, use binary in production.
    Man we could save a lot of webserver hardware if only http headers could be binary encoded ..