Category Archives: Programming

Why I left Stack Overflow

I decided to leave Stack Overflow after about 6 years. Two realizations led to this decision.

A – moderators and editors are valued much more than the people who actually create content. For every question I asked or answered, I spent more time arguing with moderators than I spent writing the post itself. A grab bag from my experience:

  • you write a sensible answer, and some mega-moderator deletes it because he believes it is link spam. Only after sending an email explaining that it is not link spam does the post get reinstated.
  • a shitload of stupid questions coming in: bad grammar, unclear what they are asking, and so on. With a signal-to-noise ratio of about -24 dB it is very, very bad.
  • there are many disrespectful people who actually demand that you look at their question. “It has to be a completely self-standing solution before I will accept the answer” is not uncommon on the site.
  • yet if you yourself ask a question, it is downvoted right away, without any explanation.
  • people who game the system: you answer someone’s question, someone comments on your answer and adds a detail, then that same person reposts your answer 2 minutes later and claims the reward.
  • people who think that downvoting an answer to a question they don’t like is the correct thing to do
  • people who cannot read a question and keep insisting that they cannot give legal advice. This resulted in a close/reopen/close/reopen round for that particular question.
  • idiots who place a bounty on their question, yet after you answer it do not award the bounty.
  • while Stack Overflow will not return an unawarded bounty to the original poster either.
  • and of course people who are not really interested in the question. You put a bounty on a question you have; about a day before the deadline expires someone ‘answers’ it by nicely formulating some general things you already knew, clearly hoping to get more than 2 points so he can claim his non-answer.

Thus we end up in a situation where the teachers (those with high reputation) suck the livelihood out of content creators, who maybe don’t need endless discussions. By doing so, they create a new class that will tend to behave the same.

Because the above problems are systemic I decided to leave.

B – There is, however, a second reason: reputation is just a number; it doesn’t mean anything. I really do not need to “play the game”.

What do I lose by leaving?

The possibility to ask questions? Only 1 question of mine has ever been answered sensibly.

The possibility to answer questions? I will not do that anymore, because it really has become too difficult.

The reputation of being this person at Stack Overflow? That doesn’t mean a thing either, unless you believe that a large number makes you a better programmer.

I realize that some people might like to play the S.O. game and that is fine. It is just not for me anymore.

A reading of a Theano compiled graph

I’m trying to understand a compiled Theano function, which I printed with theano.printing.debugprint. The full monster is printed below, yet there are only a couple of lines I have problems with.

This code first computes a random yes/no vector in node 6. Node 5 is used to create the appropriate shape (resembling x).

Gemm{inplace} [id A] <TensorType(float32, matrix)> '(dcost/dW)'   23
 |Dot22 [id B] <TensorType(float32, matrix)> ''   22
 | |InplaceDimShuffle{1,0} [id C] <TensorType(float32, matrix)> 'x_tilde.T'   12
 | | |Elemwise{Mul}[(0, 0)] [id D] <TensorType(float32, matrix)> 'x_tilde'   8
 | |   |RandomFunction{binomial}.1 [id E] <TensorType(float32, matrix)> ''   6
 | |   | |<RandomStateType> [id F] 
 | |   | |MakeVector{dtype='int64'} [id G] <TensorType(int64, vector)> ''   5
 | |   | | |Shape_i{0} [id H] <TensorType(int64, scalar)> ''   1
 | |   | | | |x [id I] <TensorType(float32, matrix)>
 | |   | | |Shape_i{1} [id J] <TensorType(int64, scalar)> ''   0
 | |   | |   |x [id I] <TensorType(float32, matrix)>
 | |   | |TensorConstant{1} [id K] <TensorType(int8, scalar)>
 | |   | |TensorConstant{0.75} [id L] <TensorType(float32, scalar)>
 | |   |x [id I] <TensorType(float32, matrix)>
 | |Elemwise{Composite{((i0 - i1) * i2 * i1)}}[(0, 2)] [id M] <TensorType(float32, matrix)> ''   21
 |   |TensorConstant{(1, 1) of 1.0} [id N] <TensorType(float32, (True, True))>
 |   |Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)] [id O] <TensorType(float32, matrix)> 'reduced'   15
 |   | |Dot22 [id P] <TensorType(float32, matrix)> ''   11
 |   | | |Elemwise{Mul}[(0, 0)] [id D] <TensorType(float32, matrix)> 'x_tilde'   8
 |   | | |W [id Q] <TensorType(float32, matrix)>
 |   | |InplaceDimShuffle{x,0} [id R] <TensorType(float32, row)> ''   2
 |   |   |B [id S] <TensorType(float32, vector)>
 |   |Dot22 [id T] <TensorType(float32, matrix)> '(dcost/dreduced)'   20
 |     |Elemwise{Composite{((i0 * (i1 - Composite{scalar_sigmoid((i0 + i1))}(i2, i3)) * Composite{scalar_sigmoid((i0 + i1))}(i2, i3) * (i4 - Composite{scalar_sigmoid((i0 + i1))}(i2, i3))) / i5)}}[(0, 2)] [id U] <TensorType(float32, matrix)> ''   18
 |     | |TensorConstant{(1, 1) of -2.0} [id V] <TensorType(float32, (True, True))>
 |     | |x [id I] <TensorType(float32, matrix)>
 |     | |Dot22 [id W] <TensorType(float32, matrix)> ''   17
 |     | | |Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)] [id O] <TensorType(float32, matrix)> 'reduced'   15
 |     | | |InplaceDimShuffle{1,0} [id X] <TensorType(float32, matrix)> 'W.T'   3
 |     | |   |W [id Q] <TensorType(float32, matrix)>
 |     | |InplaceDimShuffle{x,0} [id Y] <TensorType(float32, row)> ''   4
 |     | | |B_Prime [id Z] <TensorType(float32, vector)>
 |     | |TensorConstant{(1, 1) of 1.0} [id N] <TensorType(float32, (True, True))>
 |     | |Elemwise{mul,no_inplace} [id BA] <TensorType(float32, (True, True))> ''   16
 |     |   |InplaceDimShuffle{x,x} [id BB] <TensorType(float32, (True, True))> ''   14
 |     |   | |Subtensor{int64} [id BC] <TensorType(float32, scalar)> ''   10
 |     |   |   |Elemwise{Cast{float32}} [id BD] <TensorType(float32, vector)> ''   7
 |     |   |   | |MakeVector{dtype='int64'} [id G] <TensorType(int64, vector)> ''   5
 |     |   |   |Constant{1} [id BE] 
 |     |   |InplaceDimShuffle{x,x} [id BF] <TensorType(float32, (True, True))> ''   13
 |     |     |Subtensor{int64} [id BG] <TensorType(float32, scalar)> ''   9
 |     |       |Elemwise{Cast{float32}} [id BD] <TensorType(float32, vector)> ''   7
 |     |       |Constant{0} [id BH] 
 |     |W [id Q] <TensorType(float32, matrix)>
 |TensorConstant{1.0} [id BI] <TensorType(float32, scalar)>
 |InplaceDimShuffle{1,0} [id BJ] <TensorType(float32, matrix)> ''   19
 | |Elemwise{Composite{((i0 * (i1 - Composite{scalar_sigmoid((i0 + i1))}(i2, i3)) * Composite{scalar_sigmoid((i0 + i1))}(i2, i3) * (i4 - Composite{scalar_sigmoid((i0 + i1))}(i2, i3))) / i5)}}[(0, 2)] [id U] <TensorType(float32, matrix)> ''   18
 |Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)] [id O] <TensorType(float32, matrix)> 'reduced'   15
 |TensorConstant{1.0} [id BI] <TensorType(float32, scalar)>
RandomFunction{binomial}.0 [id E]  ''   6

Then, in the second section of interest, we see that node 16 performs the multiplication of two subnodes, 14 and 13, which are both very similar except for a constant 0 or 1.

The code for node 14 reuses the same vector as node 5, but casts it to float32 (which is node 7). And then the magic, the thing I do not understand, happens: from that casted vector, a subtensor with constant 0 is selected. What does this do?

The questions:

  1. Does the SubTensor (node 10 or node 9) select the first element of the tensor from node 7?
  2. Node 7 is merely a casted version of node 5. Does that vector contain the random data generated in node 6?
  3. Once the subtensors of node 7 are selected, they are allowed to broadcast (nodes 13 and 14) to finally be multiplied with each other in node 16. Is it correct to say that node 16 thus computes the multiplication of the first random element with the second random element (from a random vector that might be quite a bit larger)?
  4. When I print out the types of the nodes, we see that the output of the subtensor is indeed a scalar (as expected), yet the type of InplaceDimShuffle and the Elemwise{mul} is (True, True). What kind of type is that?
  5. If Elemwise{Mul} does not specify ‘inplace’ (as happens in node 6), which of the two children is then modified? Is it the node associated with the RandomFunction (thus node 5), or does the RandomFunction (node 6) provide us with another copy that can be modified?

The image graph of the above function is given below. The yellow box is the one that confuses me because, if I read it right, its output is merely a broadcastable 0.


After a day of bashing my head against this nonsense I figured out the following:

MakeVector in node 5 takes the shape values as input parameters and concatenates them into a small vector.

Thus its output is a vector with two values: the dimensions of x. The random generator then uses that as input to generate a correctly sized vector. And the yellow box in the computation merely multiplies the two dimensions with each other to calculate the element count of the input. Answering each of the subquestions:

  1. Yes, the subtensors select respectively the 0th and 1st element of the input vector.
  2. Node 7 does indeed contain the casted data from node 5. However, node 5 does not contain the random data.
  3. It is wrong to say that node 16 computes the product of the first two random values. What is right: node 16 computes the product of the dimension sizes of the input vector x.
  4. The (True, True) type merely tells us the broadcasting pattern of the tensor.
  5. Without inplace, an elementwise multiplication destroys one of its inputs. The inputs that are destroyed are marked in red in the graph (see the Theano documentation on color coding).

Java join() slow

And there I was thinking that Java would offer an efficient implementation of join on threads. One that would be triggered when a thread signals it had come to an end.

Well… not so. join(0), which means ‘wait indefinitely for the thread’, results in a completely braindead while(isAlive()) wait() loop:

public final synchronized void join(long millis)
throws InterruptedException {
    long base = System.currentTimeMillis();
    long now = 0;

    if (millis < 0) {
        throw new IllegalArgumentException("timeout value is negative");
    }

    if (millis == 0) {
        while (isAlive()) {
            wait(0);
        }
    } else {
        while (isAlive()) {
            long delay = millis - now;
            if (delay <= 0) {
                break;
            }
            wait(delay);
            now = System.currentTimeMillis() - base;
        }
    }
}

To make this work as it should, we had to maintain a thread counter that each thread lowers before calling notifyAll, and then wait for that counter to reach 0. A very tedious process. Why do we have a join if it doesn’t work in a performant manner?
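The workaround looks roughly like this (a minimal sketch; the class and method names are mine, not from BpmDj):

```java
public class ThreadCounter {
    private int running;

    // Call before starting each worker thread.
    public synchronized void started() { running++; }

    // Call from each worker (ideally in a finally block) when it is done.
    public synchronized void finished() {
        running--;
        notifyAll(); // wake the thread blocked in awaitAll()
    }

    // Blocks until every started worker has called finished().
    public synchronized void awaitAll() throws InterruptedException {
        while (running > 0) wait();
    }
}
```

Unlike join(0)'s polling loop per thread, the waiter here blocks once and is woken exactly when the last worker signals.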

JavaFx: A first look

Porting BpmDj to the desktop required us to think about which platform to use. We settled on JavaFx because it has 3D support. Not that we needed that immediately, but it is something we might want to add to BpmDj somewhere in the far future.

At first sight JavaFx is fairly solid. The properties, their getters and setters, are really well defined. Even listeners to collections are well defined. That aspect I like a lot, especially since I have been creating a similar system myself. Both serve the same purpose and seem to have come up with the same solutions. Basically: they did it like I would have done it.

JavaFx CSS

The render tree is quite professional, and styleable through CSS. Yet the CSS properties are badly defined. More than once I had to dump the scene tree in order to figure out which styles are used at which point. Below is a routine that does exactly that.

public static void dump(Node n) { dump(n, 0); }

private static void dump(Node n, int depth) {
    for (int i = 0; i < depth; i++) System.out.print("  ");
    System.out.print(n);
    for (PseudoClass pc : n.getPseudoClassStates())
        System.out.print(" " + pc);
    System.out.println();
    if (n instanceof Parent)
        for (Node c : ((Parent) n).getChildrenUnmodifiable())
            dump(c, depth + 1);
}

The order in which rules are processed is counter-intuitive for the programmer. From the docs: “a style from a user agent style sheet has lower priority than a value set from code, which has lower priority than a Scene or Parent style sheet”.

This means that although you might have written a Tag class, extending the Label class, which changes the color of a specific entry through setBackground, it will simply have no effect. This leads to quite a lot of confusion online. On Stack Overflow there are hordes of people asking ‘how can I change this style from within my program, without using CSS?’. The answers might or might not work; it all depends on whether the original poster had a stylesheet assigned to his scene or not. But even then, programmatically removing a label’s style class (and replacing it with our own) didn’t help either, because the class still ‘inherited’ scene-style properties from its container. And those might include an * selector.
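One escape hatch that does sit above scene stylesheets in the priority chain is the inline style. A minimal sketch (the label and color are made up for illustration):

```java
import javafx.scene.control.Label;

public class InlineStyleExample {
    // Inline styles set via setStyle take precedence over Scene/Parent
    // stylesheets, unlike values set through setBackground() from code.
    public static Label makeTag(String text) {
        Label tag = new Label(text);
        tag.setStyle("-fx-background-color: #ffcc00;");
        return tag;
    }
}
```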

Basically, CSS is a total mess. The JavaFX implementation makes some good choices. Well… the best given the circumstances. And of course it is easy to criticise this, but in all honesty I would not be able to suggest a better alternative to resolve such ambiguity. Well, maybe I can: programmer trumps CSS. That is, any style set from within the program should be honored.


Graphs – In JavaFx jargon there is a scene graph. Mark the word graph. You would expect that one node could potentially appear at multiple places in the scene. The documentation however scales the idea of a graph down to: ‘A scene graph is a set of tree data structures where every item has zero or one parent, and each item is either a “leaf” with zero sub-items or a “branch” with zero or more sub-items.’ Why don’t they call it a tree then?

Bias – Another occasion where this happens is with the word ‘content-bias’. What the hell, dudes? If you wonder what it is: it indicates the orientation of the layout, horizontal or vertical. Why not call it orientation or direction? Because they want to feel important?

Insets – Another nice example of this is the word ‘insets’, which stands for ‘inside offsets’. So if I want padding on a control, I have to set the insets. And it goes on like this: lots of words that are either too broad or don’t mean what they appear to mean. With respect to the insets it is even more confusing because, depending on the component type, you get -fx-border-insets, margins, padding, background-insets and -opaque-insets. I don’t mind having the ability to tune things, but at least document in a nice drawing what they all mean.

Percentage – a value of 0.5 is the percentage 50%. So if I say ‘express this as a percentage’, you expect values between 0 and 100. Yet in JavaFx there seems to be some confusion about when the value ranges to 1 and when to 100. When you read the documentation for the BackgroundSize constructor, with a parameter documented as widthAsPercentage (the width is to be interpreted as a percentage), you would expect the width to be in the 0..100 range. Well, no: it is between 0 and 1.
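The quickest way to see which convention a given API uses is to construct the value and inspect it. A small sketch (the 0.5 here is my assumption of ‘50%’):

```java
import javafx.scene.layout.BackgroundSize;

public class PercentageCheck {
    public static void main(String[] args) {
        // widthAsPercentage/heightAsPercentage are true, yet the values
        // are expected in the 0..1 range: 0.5 means 50% of the region.
        BackgroundSize half = new BackgroundSize(0.5, 0.5, true, true, false, false);
        System.out.println(half.getWidth() + " asPercentage=" + half.isWidthAsPercentage());
    }
}
```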

Managed – if you want to decide on the layout parameters of a node yourself, and thus manage the node, you have to set this to ‘off’. Basically, you have to tell the tree that it should not be managing the node. This is a standard input/output confusion, and something that should be avoided in code.

Smooth or non-smooth?

That is a question, and one they apparently never answered. At various places there are markers to set things smooth, or not. Yet the render engine might or might not take this into account. To illustrate: the beatgraphs must be aliased. We want every bar clearly marked, so we do not want smoothing when we draw the beatgraphs. Thus, creating an Image which is marked as ‘non-smooth’, and then drawing it onto the GraphicsContext of a canvas… results indeed in a smoothed version (go figure). Then, at other places we actually want the render engine to smooth something. E.g. when we draw the logo in the about box, we need to downscale a fairly large image to a smaller size. Whatever collection of smooth options we set, the resulting image was never properly smoothed. To that end we actually had to write our own routine. Then there is the problem that Images have a smooth flag, which can only be set if the image is loaded from an InputStream of sorts. When the Image is created through a WritableImage, the smooth flag is always off, without a possibility to actually turn it on. However, never mind the smooth flag of the Image: the ImageView has one of its own (why have the same flag twice if it is ignored anyway?). I could probably go on about this aspect of the engine. Suffice to say that this is extremely badly defined, and the rendering is platform dependent, which makes it a useless flag.

Menubars & Stages

The menubar – apparently a total mess to get working correctly. In particular, the menubar on icewm has submenus that overlay the menubar itself, because JavaFx subtracts the window decoration from the height. Worse, the submenu cannot be selected as long as the menubar underneath is still selected. On dual screens: if the application is on the wrong screen, menus simply don’t open.

Which screen? On Linux the interaction with the window manager causes even more problems. When I open the application in fullscreen (through the window manager), and the app then opens a dialog box, suddenly the entire app jumps to the other screen, while the dialog remains on the first screen. It is really quite wacko how JavaFx is not able to figure out which screen to use, how large the window decorations are, and how to deal with that properly.


Layout – refers to the problem of deciding where to place what on the screen. This is well defined throughout the tree. The inner loop is really a charming piece of software (no irony intended). When a node says it needs a new layout, that message propagates upward in the tree, marking the parent nodes as ‘branch dirty’, so that in the next layout cycle the various branches that required a new layout are processed quite efficiently. That part I like a lot.

Inconsistent interface – The thing I don’t like is that certain properties are shadowed, or not part of a node as such. A node in itself has neither a minimum/preferred nor a maximum width, yet it does have a width. And changing the width of a node depends on the context you use it in. In some classes you have to call setFitWidth; in others you have to bend yourself into corners to actually get the thing resized (see below); at other places, setting the width with setWidth will not do a thing, and you need setPrefWidth instead. And so on. It is always interesting to try to figure out how you will actually get a node to size correctly.

Non-resizable canvases – Then there is the trick with the non-resizable canvas. If you dig into the code you find that the class hierarchy contains multiple ‘height’ properties at different levels of the tree, all meaning slightly different things. You also figure out that the ‘resize’ operation of a node is simply not implemented, and will not resize a thing. It is quite remarkable how difficult it is to get a Canvas to rescale properly.

This even extends to the problem of figuring out what the width of a node actually is. In control nodes (most stylable nodes inherit from Control, except video, image and canvases), you cannot ask the width property to tell you something useful, certainly not while you are drawing the node’s content. There you have to use getWidth, but not in a Canvas, where you cannot simply use that because its width shadows the width of the node. At other places you need to ask for the preferred width (prefWidth), which will tell you something more useful.

Probably the reason the interface is so inconsistent lies in the growth of JavaFx. Initially a node only needed to know its size; it did not need any information about its minimum/preferred/maximum sizes. That was left for more advanced layout approaches. So a new layer was slapped on it, with a similar interface, but this time presented as high-level (bean) properties. Yet that was too slow, so a cache layer was added, again with similar names. In the end, as an outsider, it is kinda hard to figure out what is actually going on. And even the theoretical possibilities often do not work out.

The always-growing layout – In JavaFx there is a mismatch between what most users think would happen and what the program actually does. Nobody even bothers anymore to figure out what the rules actually are. This is, in other words, a failure of the software design, and very reminiscent of the ‘good old AWT classes’. If you cannot communicate clearly how it is supposed to work, then how do you expect people to make the fullest use of it?

One good example of this is the always-growing layout. The layout mechanism has the possibility to bind the width of an element to the width of its parent. However, this often leads to a loop in which the dialog keeps on growing. Nevertheless, on Stack Overflow people often simply state: ‘just bind it like this’. They should really try that ‘solution’ in real-life applications. Most of the time it will not do what you want.
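For reference, the kind of binding being suggested amounts to this (a sketch; Pane and Region stand in for the real dialog content):

```java
import javafx.scene.layout.Pane;
import javafx.scene.layout.Region;

public class GrowingBind {
    // The oft-suggested 'just bind it': the child's preferred width follows
    // the parent's actual width. If the parent in turn sizes itself to its
    // children (plus insets), each layout pass can grow the dialog a little.
    public static Region bindToParent(Pane parent, Region child) {
        child.prefWidthProperty().bind(parent.widthProperty());
        parent.getChildren().add(child);
        return child;
    }
}
```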


Performance – The render engine is a very nice step back from object orientation. Objects are used to decide what to render when, yet the render engine itself is dissociated from them: it merely receives a set of commands that it executes. On 3D-accelerated cards this goes very fast. This aspect is worth applauding, although not uncommon in the industry; Android and Qt both went in this direction. Yet in many frameworks it is difficult to draw things sufficiently fast, and JavaFx has this problem too. A special-purpose GraphicsContext functions as a canvas/bitmap that can be used to create a texture before it goes to the underlying engine. Typically, operations on such bitmaps are slow, and the resulting framerate drops dramatically when even a moderate number of objects is drawn. Compared to Android, the JavaFx direct-draw pipeline is very slow. When only using image-based components, the performance is acceptable.

Repaints are automatic, in theory – After layout, the content of nodes must be rendered. That is a tricky operation. Most routines are somewhere in native code, and if you want to draw your own image you might need to use a Canvas. Fair enough; I already talked about this.

However, sometimes an update to a node does not trigger a render cycle. Particularly list views and their list cells are susceptible to this. And there is no sensible way to demand a repaint. People have been using a trick to force things to render: ask the window to make a snapshot of its content as a 1×1 pixel image. That triggers the dirty render flags of all nodes. Honestly: is that the best we can do in 2016? Take a screenshot of the entire app so that it will actually render? Interestingly, that routine itself eventually states:

// we need to mark the entire scene as dirty
// because dirty logic is buggy
if (scene != null && scene.impl_peer != null)

Thus, instead of solving the underlying problem, the developers worked around it. That does not give me a warm fuzzy feeling.
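The snapshot hack mentioned above boils down to something like this (a sketch; it must be called on the JavaFX application thread, and the helper name is mine):

```java
import javafx.scene.Scene;
import javafx.scene.SnapshotParameters;
import javafx.scene.image.WritableImage;

public class RepaintHack {
    // Taking any snapshot marks the whole scene dirty, forcing a render
    // pass; a 1x1 target keeps the cost of the screenshot itself minimal.
    public static void forceRepaint(Scene scene) {
        scene.getRoot().snapshot(new SnapshotParameters(), new WritableImage(1, 1));
    }
}
```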

Did this cause problems? Of course, otherwise I would not have noticed. In practice, list-cell updates would effectively screw up an entire path of update requests to the root. Not only was the list cell not updated; its parents weren’t either. To solve that, we had to write code that would actually ‘change’ the content of a textfield before setting it to what we wanted. So in the end you get code like:

String currentText = tempo.getText();
if (Objects.equal(currentText, info.tempo)) {
    // force a change event first: set a value guaranteed to differ
    tempo.setText(currentText == null ? "bla bla bla" : null);
}
tempo.setText(info.tempo);

Yes, that was ‘entertaining’, and totally useless. The back and forth was necessary to actually get JavaFx to render our cells.

Sometimes things just refuse to render – to prohibit a node from drawing outside its area, in particular our beatgraph scroller, we have to set a clipping mask. That is done as follows:

a.rect(0, 0, getWidth(), beatgraphHeight);

Occasionally this would not work and no rendering would take place at all. After many hours of digging through code and testing, we found that when beatgraphHeight was a non-integer value, the clip was simply not performed on the Graphics2DContext. To get this working we had to write

a.rect(0, 0, getWidth(), Math.round(beatgraphHeight));

Whether this was solely related to my Linux install or not, I couldn’t tell. It was nonetheless a bug that I would typically associate with AWT.

If you follow the internals of the rectangle call, you see that it transforms all double coordinates to floats, which is another issue one could have with JavaFx. Internally it is all floats, yet the API presents everything as doubles. It would be a lot faster to keep everything at the same bit width. Not only that: you would not have the typical Java-style float-to-double conversion errors.

Internal images too large – when we drew the beatgraph overview at the top of the app, everything worked fine… until we added

g.setEffect(prelistening ? gray : null);

In order to render this effect, even for the parts it would not render in the end, it sometimes had to allocate an image larger than 16384 pixels. That was because we drew a line at x-position 5000; because that position was not visible anyway, it didn’t matter, we thought.
Nevertheless, the effect rendering demanded an internal image large enough to store the full range of what we were drawing, only to then make everything grey. From a general-purpose point of view this is somewhat expected, because pixels outside the visible region could potentially affect pixels within the visible region. Yet it would be useful if they made some sensible estimates that reflect the reality of a color transformation (that is: stick to the sizes you got). Even in the case of blur filters, adding a margin that reflects the size of the blur radius would suffice to properly render the visible region. Also, in this case it was nearly impossible to understand what went on, because the rendering takes place in its own thread. First we tried to figure out where we allocated such a large image; we couldn’t find any. Then we tried to set breakpoints; and finally we were lucky that this part of the rendering was actually allocated in non-native code, so we could understand what went on.

Object orientation ?

Internal data structures – It is here that I lose my respect for the developers. The thing with data structures is the following: how many times do you hear people say ‘don’t use arrays (even for sensible purposes), always use ADTs such as ArrayList, Tree, etc.’? Yet time and time again I find that the Java developers hate their own data structures. In 1998 I figured out that their runtime stack was not really a stack, but an array that was copied when it grew too large. The result: if the stack became larger, the program became slower. Now, in JavaFx, exactly the same problem: if the event queue of JavaFx becomes larger, the program slows down. This makes no sense. At all.

And then there is the problem of the selection model, which is simply stored as a straight linked list, without any option to quickly find an element. Of course, when you add something to the list, the listview will automatically try to find the current selection back in the list. And here the performance penalty really hits: if you batch-insert 27000 elements, then for each of those elements the selection is recovered, meaning that we have O(n²) performance. That is really bad, because a HashMap would already have helped with finding the currently selected items; or, when a batch insert takes place, only recover the current selection after really inserting all elements. In our case, having 27000 songs in a list with 1 song selected would lead to batch insertion times of about 5 minutes. That is no laughing matter; that is a fatal performance penalty.

Because of such crappy data structures, you often have to twist the surrounding program into corners. In the end, after we understood what went on, we removed the selection, updated the list, and then recovered the selection ourselves.
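In code, that workaround amounts to something like this (a sketch; the helper name and generic shape are mine, not BpmDj's actual code):

```java
import java.util.ArrayList;
import java.util.List;
import javafx.scene.control.ListView;

public class BatchInsert {
    // Drop the selection, batch-insert, then restore it, so the ListView
    // does not re-search its linked-list selection model once per inserted
    // element (the O(n^2) behaviour described above).
    public static <T> void insertAll(ListView<T> view, List<T> items) {
        List<T> selected = new ArrayList<>(view.getSelectionModel().getSelectedItems());
        view.getSelectionModel().clearSelection();
        view.getItems().addAll(items);
        for (T t : selected)
            view.getSelectionModel().select(t);
    }
}
```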

The case of the swallowed exceptions – Sometimes you just get an ‘Exception in Application start’. Which is actually not true: it does not take place in the application’s start. If it did, you would be able to write a catch in that start method. No, it is an exception somewhere in the event-queue handling while the start method has not yet finished (which is pretty much the entire program). To debug it, the easiest way is to go to LauncherImpl and set a breakpoint at the line that tells you there is an exception. Then debug it, and when it crashes you can actually see the exception. And hope the debugger is smart enough to find the exception back in the compiled code.

Swallowing exceptions just like that is not only bad framework design, it is an absolutely horrible thing to do to the users of your framework. Now they can really go on an egg hunt to find that nonspecific ‘Exception in Application start’.

The singular JavaFx Application – only one JavaFx application can be launched per virtual machine. That is annoying and breaks any decent OO design. It introduces a global state that is really not necessary. It was, for instance, impossible to create a loader that would download a bpmdj.jar and then jump into the JavaFx launch of the app. Jumping into that launch was not possible because the loader had already instantiated its JavaFx application. The easiest way to solve this was to bypass the entire LauncherImpl class. Thereby we got rid of the swallowed exceptions, and we were able to open multiple application stages without any problem.

One or more listeners – In JavaFx there is a certain ambiguity between properties, which can have multiple listeners, and events, to which only one listener can be attached. An example: attaching multiple listeners to the hiding of a stage is not possible, which is a bit weird because changes to the width can have multiple listeners.

Things I miss from Android

Event time information – Q: when was an event placed in the queue? A: impossible to know. And that is annoying if you write a real-time application that must know when a key was pressed or a mouse was clicked.

A timed runLater – sometimes you want things to be done a bit later. I find it a waste of resources to spawn a thread for that, have that thread wait 3 seconds, and then post a runLater message to the event queue to actually do its work. It would be a lot easier if there were a Platform.runLater(&lt;runnable&gt;, &lt;delay&gt;) call.
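In the meantime, this can be approximated with a single shared java.util.Timer instead of one thread per delayed task. A sketch (the class name is mine; in a real JavaFX app the run() body would wrap the task in Platform.runLater):

```java
import java.util.Timer;
import java.util.TimerTask;

public class DelayedRunner {
    // One daemon timer thread serves all delayed tasks.
    private static final Timer TIMER = new Timer("delayed-runLater", true);

    // Schedule 'task' to run after 'delayMillis'. In a JavaFX application,
    // replace task.run() with Platform.runLater(task) so the work lands
    // on the event queue instead of the timer thread.
    public static void runLater(Runnable task, long delayMillis) {
        TIMER.schedule(new TimerTask() {
            @Override public void run() { task.run(); }
        }, delayMillis);
    }
}
```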


From what I’ve seen, the JavaFx framework is a better-designed framework in an early stage. Compared to the AWT, many painful hooks have been removed, and among the various design choices, often the most powerful one with the least amount of code has been chosen.

A good example of this are the clipping regions. In JavaFx each node can be clipped, but by default isn’t. This is useful when nodes animate beyond their parent’s boundaries. Initially you have the feeling that you do nothing but set clips, until you realize the advantage: you can set one clip for an entire branch without requiring each subbranch to do its own clipping. Efficiency and flexibility in one neat package. And that basically summarizes JavaFx.
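As a sketch of that pattern (the helper is mine): a single clip on the branch root covers every descendant.

```java
import javafx.scene.Group;
import javafx.scene.Node;
import javafx.scene.shape.Rectangle;

public class BranchClip {
    // One clip on the branch root suffices: every descendant is clipped
    // against it, so the children need no clips of their own.
    public static void clip(Node branchRoot, double w, double h) {
        branchRoot.setClip(new Rectangle(0, 0, w, h));
    }
}
```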

Sometimes the OO design is broken, and I hope this will not become a trend. At other places we noticed the standard growing pains of a platform that is still being developed.










Resizable JavaFx Canvas

To make a JavaFx canvas resizable, all that needs to be done is override the min/pref/max size methods, declare the canvas resizable, and implement the resize method.

With this approach no width/height listeners are necessary to trigger a redraw, and it is no longer necessary to bind the canvas width and height to the container.

public class ResizableCanvas extends Canvas {

    @Override
    public double minHeight(double width) {
        return 64;
    }

    @Override
    public double maxHeight(double width) {
        return 1000;
    }

    @Override
    public double prefHeight(double width) {
        return minHeight(width);
    }

    @Override
    public double minWidth(double height) {
        return 0;
    }

    @Override
    public double maxWidth(double height) {
        return 10000;
    }

    @Override
    public double prefWidth(double height) {
        return minWidth(height);
    }

    @Override
    public boolean isResizable() {
        return true;
    }

    @Override
    public void resize(double width, double height) {
        setWidth(width);
        setHeight(height);
        // redraw the canvas contents at the new size here
    }
}
Note that the resize method cannot simply call Node.resize(width, height), because the standard implementation is effectively empty; it has to store the new width and height itself.

IAB startSetup NullPointerException

Another fine example of Google’s ‘write once, let others fix our problems’. Even at this moment (June 2014) the ‘patch’ is still not in the released SDK. To solve the problem, replace

 if (!mContext.getPackageManager().queryIntentServices(serviceIntent, 0).isEmpty()) {

with

  PackageManager pm = mContext.getPackageManager();
  List<ResolveInfo> intentServices = pm.queryIntentServices(serviceIntent, 0);
  if (intentServices != null && !intentServices.isEmpty()) {

Then, in the dispose code, replace

  if (mContext != null) mContext.unbindService(mServiceConn);

with

  if (mContext != null && mService != null) mContext.unbindService(mServiceConn);

I hope this helps.

The Symbian/Qt/Nokia platform. A total nightmare.

In June/July 2011 I had the misfortune to program for a Symbian/Nokia phone. A really bad experience. Below is a list of all the shit I had to go through to get QtCreator working somewhat.

  • crappy error messages. ‘a data abort exception has occurred’
  • qml is a complete paradigm shift
  • unstable operating system. ‘it might be in an unstable state if you interrupt the working of this program’
  • version control unfriendly. Often a file added by somebody else must be re-added in the repository, which leads to duplicate entries in the project file.
  • memory mapping of large files doesn’t work
  • memory requirements are too stringent
  • debugger doesn’t work. One always gets memory dumps without access to the symbols.
  • bloody slow compilation
  • the X7 Symbian 3 phone is slow.
  • Try to get the audio stream from a decoder. With Phonon it should be possible, but it doesn’t work.
  • just click on the ‘documentation’ tab on the left. You get an error ‘Die Seite konnte nicht gefunden werden’ (“The page could not be found”) for ‘qthelp://
  • sometimes code gets hidden and cannot be unhidden unless the project is closed and opened again.
  • how can one create a subfolder?
  • copying files to and from the phone more often than not does not work.
  • try to use Phonon or QtMultimedia to access the decoded part of a video!
  • searching in all project files simply doesn’t work.
  • QtCreator often seems to require compilation twice: ‘build’ followed by ‘run’ leads to a rebuild.
  • no Linux, no VMware. You must program in Windows.
  • QtCreator crashes when there are more than 4000 error messages from the compiler.
  • sometimes when adding a new file, it insists on adding it somewhere in a subdirectory.
  • The ARM processor in those phones is not fully functional. For instance the ‘smull’ and ‘adc’ instructions are not supported, mainly because it is based on too old a model of the processor.
  • ‘piss-poor’ headphone volume; something many buyers noted as well
  • Filenames (without their paths) are used to identify files, so the same filename at two different locations is treated by the make system as the same file.
  • How to set the volume of the phone? A truly horrendous hack: obtaining the address of an undocumented function and then hoping that it will actually point to the right thing!
  • No support from Nokia whatsoever, even if you are hired by them (which I indirectly was!)

I was so happy that I could stop programming for/on this utter swamp of a platform.

How to know whether a copy-on-write page is an actual copy ?

When I create a copy-on-write mapping (a MAP_PRIVATE) using mmap, some pages of this mapping will be copied as soon as I write to the corresponding addresses. At a certain point in my program I would like to figure out which pages have actually been copied. There is a call, ‘mincore’, but that only reports whether a page is in memory or not, which is not the same as whether the page has been copied.

In order to figure out which pages have been copied, the pagemap and kpageflags files can be used. Below is a quick test that checks whether a page is ‘SWAPBACKED’, as it is called. One problem remains, of course: kpageflags is only accessible to root.

#define _GNU_SOURCE /* for lseek64 */
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char* argv[])
{
  unsigned long long pagesize = getpagesize();
  int pagecount = 4;
  int filesize = pagesize * pagecount;
  int fd = open("test.dat", O_RDWR);
  if (fd <= 0)
    {
      fd = open("test.dat", O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
      printf("Created test.dat testfile\n");
    }
  int err = ftruncate(fd, filesize);

  char* M = (char*)mmap(NULL, filesize, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
  printf("Successfully created private mapping\n");

The test setup contains 4 pages. Pages 0 and 2 are dirtied:

  strcpy(M,"I feel so dirty\n");
  strcpy(M+pagesize*2,"Christ on crutches\n");

Page 3 is only read from:

  char t=M[pagesize*3];

Page 1 will not be accessed at all.

The pagemap file maps the process’s virtual memory to physical page frames, which can then be looked up in the global kpageflags file. See /usr/src/linux/Documentation/vm/pagemap.txt for the entry format.

  int mapfd = open("/proc/self/pagemap", O_RDONLY);
  assert(sizeof(long long) == 8);
  unsigned long long target = ((unsigned long)(void*)M) / pagesize;
  err = lseek64(mapfd, target * 8, SEEK_SET);

Here we read the pagemap entry for each of our virtual pages:

  unsigned long long page2pfn[pagecount];
  err = read(mapfd, page2pfn, sizeof(long long) * pagecount);
  if (err < 0)
    {
      perror("Reading pagemap");
      return 1;
    }
  if (err != sizeof(long long) * pagecount)
    printf("Could only read %d bytes\n", err);

Now, for each virtual page, we read the actual page flags from kpageflags:

  int pageflags = open("/proc/kpageflags", O_RDONLY);
  for (int i = 0; i < pagecount; i++)
    {
      unsigned long long v2a = page2pfn[i];
      printf("Page: %d, pagemap entry %llx\n", i, page2pfn[i]);

      if (v2a & 0x8000000000000000LL) // Is the virtual page present?
        {
          unsigned long long pfn = v2a & 0x7fffffffffffffLL; // bits 0-54
          unsigned long long pf;
          lseek64(pageflags, pfn * 8, SEEK_SET);
          err = read(pageflags, &pf, 8);
          printf("pageflags are %llx with SWAPBACKED: %d\n", pf, (int)((pf >> 14) & 1));
        }
    }
  return 0;
}

All in all, I’m not particularly happy with this approach, since it requires access to a file that we in general can’t access, and it is bloody complicated (how about a simple kernel call to retrieve the page flags?).

Converting a Photo to a Sketch

Original Photo

Value Propagation

The first step we perform is to propagate dark values. This can be found in the Gimp menu Filters | Distorts | Value Propagate. The settings are shown below

After this step the image has a more consistent value distribution throughout the writing. This is more clearly visible if we look at the text in detail.

Left: without dark value propagation. Right: with dark value propagation.


Edge detection

The next step is an edge detection based on the difference of Gaussians. This step effectively makes the background (which is currently still a bit grey) white. In Gimp this is done with Filters | Edge Detect | Difference of Gaussians. The settings are shown below

The effect on our example is shown below

An obvious effect of this operation is that the image becomes lighter in general

Light Intensity

To solve this problem we apply a histogram normalization. In Gimp this is Colors | Auto | Normalize. The result:

Removing the speckle

The remaining speckle can be removed with a selective gaussian blur. Filters | Blur | Selective Gaussian Blur.

Beware that this operation can take quite some time, and it might only be necessary when there is some annoying speckle present.

Crop, rotate and perspective

The last step that can be performed is a rotation/cropping to align the image properly. The results are shown below, comparing the original image against the transformed image, and then the image pre-scaled for your browser.

The pre-scaled image looks like this: