Category Archives: Programming

Shortest Distance from a Point to a Line segment

A straight translation from Python to Java; taken from

public double distance2(Point2D pnt) {
    double lineVecDx = to.x - from.x;
    double lineVecDy = to.y - from.y;
    double pntVecDx = pnt.getX() - from.x;
    double pntVecDy = pnt.getY() - from.y;
    double lineLen = dist(lineVecDx, lineVecDy);
    // Unit vector along the segment, and the point vector scaled by the
    // segment length; their dot product is the projection parameter t.
    double lineUnitvecDx = lineVecDx / lineLen;
    double lineUnitvecDy = lineVecDy / lineLen;
    double pntVecScaledDx = pntVecDx / lineLen;
    double pntVecScaledDy = pntVecDy / lineLen;
    double t = lineUnitvecDx * pntVecScaledDx + lineUnitvecDy * pntVecScaledDy;
    if (t < 0) t = 0;               // clamp the projection onto the segment
    else if (t > 1) t = 1;
    double nearestX = lineVecDx * t;
    double nearestY = lineVecDy * t;
    return dist(nearestX - pntVecDx, nearestY - pntVecDy);
}

double dist(double dx, double dy) {
    return Math.sqrt(dx * dx + dy * dy);
}
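For a sanity check, the same computation can be packaged as a self-contained static method (the class and parameter names here are mine; the from/to fields of the original become explicit arguments):

```java
// Sketch: the segment-distance computation above as a standalone static
// method, with the segment end points passed explicitly.
public class SegmentDistance {
    public static double distance(double fromX, double fromY,
                                  double toX, double toY,
                                  double px, double py) {
        double lineVecDx = toX - fromX;
        double lineVecDy = toY - fromY;
        double pntVecDx = px - fromX;
        double pntVecDy = py - fromY;
        double lineLenSq = lineVecDx * lineVecDx + lineVecDy * lineVecDy;
        // t is the normalized position of the projection of the point
        // onto the (infinite) line through the segment.
        double t = (lineVecDx * pntVecDx + lineVecDy * pntVecDy) / lineLenSq;
        if (t < 0) t = 0;          // clamp to the segment's end points
        else if (t > 1) t = 1;
        double nearestX = lineVecDx * t;
        double nearestY = lineVecDy * t;
        return Math.hypot(nearestX - pntVecDx, nearestY - pntVecDy);
    }
}
```

For instance, the distance from (5,5) to the segment (0,0)-(10,0) is 5, and from (-3,4) it is also 5, because the projection clamps to the (0,0) end point.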

A replacement for the JavaFx FontMetrics class for JDK9

Most people used an internal JavaFX FontMetrics class, which has been deprecated in version 9 of the JDK. That means an app that relied on it will simply not work anymore. Below is a simple replacement that provides computeStringWidth as well as ascent, descent and lineHeight. The produced values are exactly the same as if they were obtained from the internal FontMetrics class itself.

import javafx.geometry.Bounds;
import javafx.scene.text.Font;
import javafx.scene.text.Text;

public class FontMetrics {
    final private Text internal;
    public float ascent, descent, lineHeight;

    public FontMetrics(Font fnt) {
        internal = new Text();
        internal.setFont(fnt);
        Bounds b = internal.getLayoutBounds();
        lineHeight = (float) b.getHeight();
        ascent = (float) -b.getMinY();
        descent = (float) b.getMaxY();
    }

    public float computeStringWidth(String txt) {
        internal.setText(txt);
        return (float) internal.getLayoutBounds().getWidth();
    }
}

JavaFx 8 command line options

I have been struggling with an “interesting” JavaFX render problem. In BpmDj, given enough updates, the render tree would partially stop updating. It was clear this was a concurrency bug, yet one that was not my fault. As a preliminary fix I would grab a screenshot every time I added a node. Recently, however, that workaround no longer worked: the dirty logic would be so fucked up that elements which should be visible were not visible. It was even possible for a font to change its color halfway through its rendering.

So I started to look around for javafx options, and there are a few of them that proved to be useful.

What Flag Default
VSync prism.vsync true
Dirty region optimizations prism.dirtyopts true
Occlusion Culling prism.occlusion.culling true
dirtyRegionCount prism.dirtyregioncount 15
Scrolling cache optimization prism.scrollcacheopt false
Check for FX/render thread collisions prism.threadcheck false
Draws overlay rectangles showing where the dirty regions were prism.showdirty false
Draws overlay rectangles showing not only the dirty regions, but how many times each area within that dirty region was drawn (covered by bounds of a drawn object). prism.showoverdraw false
Prints out the render graph, annotated with dirty opts information prism.printrendergraph false
Force scene repaint on every frame prism.forcerepaint false
disable fallback to another toolkit if prism couldn’t be init-ed prism.noFallback false
Shape caching optimizations prism.cacheshapes complex
New javafx-iio image loader prism.newiio true
Verbose output prism.verbose false
Prism statistics print frequency, <=0 means “do not print” prism.printStats 0
Debug output prism.debug false
Trace output prism.trace false
Print texture allocation data prism.printallocs false
Disable bad driver check warning prism.disableBadDriverWarning false
Force GPU, if GPU is PS 3 capable, disable GPU qualification check. prism.forceGPU false
Skip mesh normal computation prism.experimental.skipMeshNormalComputation false
Which driver to use prism.order
prism.forcepowerof2 false
prism.noclamptozero false
Allow hi-DPI scaling prism.allowhidpi true
Try -Dprism.maxvram=<number>[kKmMgG] prism.maxvram 512 * 1024 * 1024
Try -Dprism.targetvram=<number>[kKmMgG]|<double(0,100)>% prism.targetvram
prism.poolstats false
prism.pooldebug false
prism.disableD3D9Ex false
prism.disableEffects false
prism.glyphCacheWidth 1024
prism.glyphCacheHeight 1024
Enable the performance logger, print on exit, print on first paint etc. sun.perflog
prism.supershader true
Force uploading painter (e.g., to avoid Linux live-resize jittering) prism.forceUploadingPainter false
Force the use of fragment shader that does alpha testing (i.e. discard if alpha == 0.0) prism.forceAlphaTestShader false
Force non anti-aliasing (not smooth) shape rendering prism.forceNonAntialiasedShape false
Set Single GUI Threading quantum.singlethreaded false
Print quantum verbose quantum.verbose false
JavaFx framerate in FPS javafx.animation.pulse 60

FX thread collision with Render thread ?

When running with the prism.threadcheck option I got the following error:

ERROR: PrismPen / FX threads co-running: DIRTY: false
FX: java.lang.Thread.getStackTrace(
FX: javafx.scene.Scene$MouseHandler.process(
FX: javafx.scene.Scene$MouseHandler.access$1500(
FX: javafx.scene.Scene$MouseHandler$
FX: com.sun.javafx.application.PlatformImpl.lambda$null$173(
FX: Method)
FX: com.sun.javafx.application.PlatformImpl.lambda$runLater$174(
FX: Method)
QR: java.util.concurrent.Executors$
QR: java.util.concurrent.FutureTask.runAndReset(
QR: java.util.concurrent.ThreadPoolExecutor.runWorker(
QR: java.util.concurrent.ThreadPoolExecutor$

Of course, how that related to my problem wasn’t clear, although maybe it could explain why the dirty flags were all ****ed up.

Overdraw and dirty draws


The option -Dprism.showoverdraw=true is interesting because it shows us which areas were drawn once (red), which twice (green), and so on. This showed me that the areas that didn’t update were not even marked as ‘dirty’, suggesting that the problem was indeed in the setting and tracking of the dirty flags in the renderer. Therefore we tried the option -Dprism.dirtyopts=false and, hooray, all problems disappeared. Of course the app ran slightly slower, but at least it worked now.

Java ReentrantReadWriteLock oddities

When I was hit with starvation of some of the update threads in BpmDj, I was a bit puzzled. After all, I did use a ReentrantReadWriteLock in fair mode. Simple profiling showed that certain transactions were substantially heavier (a write lock being held for, let’s say, 10 seconds), while the database reads would merely take 1 second.

From that I concluded that because the write lock was held longer, other threads did not have the opportunity to get a fair amount of lock time themselves. E.g.: the write lock is released, the longest-waiting read lock is granted, that transaction is done within a second, and the write lock is granted again, not to be released for another 10 seconds.

To test this I set up a program to create 10 reader threads and 1 writer thread. Each thread would acquire a lock, wait some time (to simulate the ‘work’ done in the locked section) and then release the lock. This would be performed in a loop for about 10 seconds. Afterwards, we could measure how much time each thread spent inside the lock and compare that with the amount of work the thread wanted to perform.
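A sketch of that harness (class and method names are mine; the sleep durations and time budget below are illustrative, not the values used for the measurements):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: N reader threads and one writer thread loop for a fixed
// wall-clock budget; each records how long it held its lock. Work
// inside the locked section is simulated with Thread.sleep.
public class LockFairness {
    public static long[] run(boolean fair, int readers,
                             long readMs, long writeMs, long budgetMs) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock(fair);
        long[] held = new long[readers + 1];      // held[0] = the writer
        long deadline = System.currentTimeMillis() + budgetMs;
        Thread[] ts = new Thread[readers + 1];
        for (int i = 0; i < ts.length; i++) {
            final int id = i;
            final Lock lock = (id == 0) ? rw.writeLock() : rw.readLock();
            final long workMs = (id == 0) ? writeMs : readMs;
            ts[i] = new Thread(() -> {
                while (System.currentTimeMillis() < deadline) {
                    lock.lock();
                    long t0 = System.currentTimeMillis();
                    try { Thread.sleep(workMs); }      // simulated work
                    catch (InterruptedException e) { /* keep going */ }
                    finally {
                        held[id] += System.currentTimeMillis() - t0;
                        lock.unlock();
                    }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return held;
    }
}
```

Comparing held[0] (the writer) against the readers' totals, for the different work ratios, yields the kind of numbers reported below.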

These were the results:

Unfair Lock

The unfair lock behaved, as expected, quite unfairly. If the writer had 10 times less work than the reader threads, its lock time would be 8 times higher. If the writer had the same amount of work, it got 16 times more lock time, and if the writer had 10 times more work, it was granted 50 times more lock time.

/10.0 => *7.944541604031417
1.0 => *15.917042652441687
*10.0 => *50.5366207048361

A fair lock

When we created the read/write lock in a fair fashion, the results were more in line with what we would expect:

/10.0 => /8.810361366979999
1.0 => /1.0021791947397343
*10.0 => *9.009623837637745

That is, when the writer has 10 times less work than the readers, it gets 8 times less lock time. If it has the same amount of work, it receives the same amount of lock time, and if it has 10 times more work, it is granted 9 times more lock time.

This is completely as expected, yet not something we might want, because it allows heavy tasks to block the lighter tasks.

A fair lock, prefixed with tryLock()

tryLock allows an app to test a lock and, if the lock is not granted, continue with something else. There are two tryLock variants: the first without parameters (tryLock()), the second with a timeout.
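A sketch of both variants on a fair ReentrantReadWriteLock (the class name is mine; per the javadoc, tryLock() acquires the lock if it is free even when other threads are queued for it, while tryLock(0, TimeUnit) honors the fairness queue):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: the two tryLock variants on a *fair* read/write lock.
// tryLock() barges in, ignoring the fair queue; tryLock(0, unit)
// fails immediately if other threads are queued ahead of us.
public class TryLockVariants {
    public static boolean[] demo() {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock(true);
        boolean barging = rw.writeLock().tryLock();   // ignores the queue
        if (barging) rw.writeLock().unlock();
        boolean queued;
        try {
            queued = rw.writeLock().tryLock(0, TimeUnit.SECONDS); // respects it
        } catch (InterruptedException e) {
            queued = false;
        }
        if (queued) rw.writeLock().unlock();
        return new boolean[] { barging, queued };
    }
}
```

On an uncontended lock, as here, both calls succeed; the difference only shows under contention, which is exactly what the measurements below expose.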

Trylock() screws up any scheduling that might have been in place and just barges in on anything the algorithm might be planning to do.

/10.0 => /925.0224470413133
1.0 => /99.7994977890176
*10.0 => /11.036514077119893

In this scenario, the writer thread pretty much does not get anything done. Whether it is performing 10 times less or 10 times more work, its lock time ranges from ~ /1000 to /10. This is very bad: you might expect tryLock to make the lock unfair, yet the results are the complete opposite of the unfair lock (see above).

A fair lock, prefixed with tryLock(timeout)

There is a second variant of tryLock: one with a timeout, which can indeed be 0. If we apply that, we get the following results:

/10.0 => /8.949334569534225
1.0 => /1.0091078687281538
*10.0 => *9.016406673440223

which is in line with the straightforward fair lock.

In BpmDj, we had used tryLock() instead of tryLock(0, TimeUnit), assuming that a timeout of 0 would result in the same behavior. As the measurements show, it does not.

Cache coherence and Java

You all know my good old friend Doug Lea ? A moron who instead of putting any sensible locking strategy into Java has been putting the most nonsensical thing in it ? No ? Well, let’s dig into his latest bullshit. Somewhere along the line he decided that cache coherence is really something you want the programmer to care about. There is a classic demonstration of the thing most people would expect to work that does not work: if you have a synchronized write to a variable, the (non-synchronized) reads from other threads can retrieve a stale version from their cache. That means that if you have to synchronize to write a variable, you likely have to synchronize all reads as well. If you don’t want to do that, or if you cannot do that, then you should use the volatile keyword.

That means two things:

  1. synchronizing treemaps/hashmaps through a facade basically means also locking on read accesses. Huge performance penalty.
  2. if you have an executor array of threads that will work their way through an ordered set of thunks to be executed, then each of these thunks has to have all its variables marked as volatile. Otherwise no cache coherence can be guaranteed, and a thunk that would work on thread 1 might just not work when executed on thread 2.

It really hurts me to have to think through bullshit like this. You expect your cache to be coherent. That’s it. If it is not, then why the hell am I using a virtual machine ? I might as well start programming kernels.
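A minimal sketch of the volatile fix described above (names are mine): a writer thread sets a volatile flag, and the Java memory model guarantees the spinning reader eventually observes the write. With a plain (non-volatile) field, the loop would be allowed to spin forever on a stale cached value.

```java
// Sketch: visibility of a cross-thread write through a volatile field.
public class VisibilityDemo {
    private static volatile boolean done = false;

    public static boolean waitForWriter() {
        done = false;
        Thread writer = new Thread(() -> done = true);
        writer.start();
        // Busy-wait. This terminates because 'done' is volatile, so the
        // other thread's write is guaranteed to become visible here.
        while (!done) Thread.yield();
        return done;
    }
}
```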

Android storage permissions

I’m an overall unhappy person. What else would you expect if you have to navigate that excuse for an OS, which is called Android.
This time I’m talking Android Storage Permissions.
With each new release they redefine the rules completely. So whatever BpmDj wrote in the past on your device in _persistent_ storage will no longer be persistent in newer versions. Even the idea that, when your app requests file write permissions, it should actually have them seems far-fetched. No, before you create a directory you have to ask the OS for permission to do so. Thereby you have to notify the user (depending on the Android version) and request the permission at runtime (why do they have a manifest then, anyway ?). Of course you are not allowed to ask for that permission in the application’s main routine. No no no no no. You can only ask for it when the first user interface element is shown. Until then you can gloriously fuck off with that nice idea of creating the necessary directories and database for your app.
Even worse, the user can withdraw that permission at any moment during the run of your application. That means you have to check EVERY BLOODY INSULTING file operation.
Now, you think ‘checking permissions’ is an easy thing to do. Also there you are completely wrong. It can only be done in an asynchronous manner. Yes. Fuck that nice function you had.
The correct answer at gives an idea on how much retardation is exactly involved.

Why I left Stack Overflow

I decided to leave Stack Overflow after about 6 years. Two realizations led to this decision.

A – moderators and editors are much more valued than people who actually create content. For every question I tried to ask, or tried to answer, I have spent more time talking with moderators than actually writing it. Just a grab from my experience:

  • you write a sensible answer, and some mega moderator just deletes it because he believes it is link spam. After sending an email that it is not link spam, the post gets reinstated.
  • a shitload of stupid questions coming in. Bad grammar, unclear what they are asking, and so on. With a signal/noise level of about -24 dB it is very, very bad.
  • there are so many disrespectful people who actually demand that you look at their question. “It has to be a completely self-standing solution before I will accept the answer” is not uncommon at the site.
  • but if you yourself ask a question, it is downvoted right away, without any explanation.
  • people who game the system: you answer someone’s question, someone comments on your answer and adds a detail. Then that same person takes your answer, reposts it 2 minutes later, and claims the reward.
  • people who think that downvoting an answer to a question they don’t like is the correct thing to do.
  • people who cannot read a question and keep insisting that they cannot give legal advice. This resulted in a round of close/reopen/close/reopen for that particular question.
  • idiots who place a bounty on their question, yet after you answer them do not award the bounty.
  • and Stack Overflow will not return an unawarded bounty to the original poster.
  • and of course people who are not really interested in the question. You put a bounty on a question you have. About a day before the deadline expires someone ‘answers’ it by nicely formulating some general things I already knew, clearly hoping to get more than 2 points so he can claim his non-answer.

Thus we end up in a situation where the teachers (those with high reputation) suck the life out of the content creators, who maybe don’t need endless discussions. By doing so, they create a new class that will tend to behave the same.

Because the above problems are systemic I decided to leave.

B – There is however a second reason. Namely, reputation is just a number; it doesn’t mean anything. I really do not need to “play the game”.

What do I lose by leaving ?

The possibility to ask questions ? Only 1 question has been answered sensibly.

The possibility to answer questions ? I will not do that anymore because it really has become too difficult.

The reputation that I am this person at Stack Overflow ? It doesn’t mean a thing either, except if you believe that a large number makes you a better programmer.

I realize that some people might like to play the S.O. game and that is fine. It is just not for me anymore.

A reading of a Theano compiled graph

I’m trying to understand a compiled Theano function, which I printed with theano.printing.debugprint. The full monster is printed below, yet it is only a couple of lines that I have problems with.

This code first computes a random yes/no vector in node 6. Node 5 is used to create the appropriate shape (resembling x).

Gemm{inplace} [id A] <TensorType(float32, matrix)> '(dcost/dW)'   23
 |Dot22 [id B] <TensorType(float32, matrix)> ''   22
 | |InplaceDimShuffle{1,0} [id C] <TensorType(float32, matrix)> 'x_tilde.T'   12
 | | |Elemwise{Mul}[(0, 0)] [id D] <TensorType(float32, matrix)> 'x_tilde'   8
 | |   |RandomFunction{binomial}.1 [id E] <TensorType(float32, matrix)> ''   6
 | |   | |<RandomStateType> [id F] 
 | |   | |MakeVector{dtype='int64'} [id G] <TensorType(int64, vector)> ''   5
 | |   | | |Shape_i{0} [id H] <TensorType(int64, scalar)> ''   1
 | |   | | | |x [id I] <TensorType(float32, matrix)>
 | |   | | |Shape_i{1} [id J] <TensorType(int64, scalar)> ''   0
 | |   | |   |x [id I] <TensorType(float32, matrix)>
 | |   | |TensorConstant{1} [id K] <TensorType(int8, scalar)>
 | |   | |TensorConstant{0.75} [id L] <TensorType(float32, scalar)>
 | |   |x [id I] <TensorType(float32, matrix)>
 | |Elemwise{Composite{((i0 - i1) * i2 * i1)}}[(0, 2)] [id M] <TensorType(float32, matrix)> ''   21
 |   |TensorConstant{(1, 1) of 1.0} [id N] <TensorType(float32, (True, True))>
 |   |Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)] [id O] <TensorType(float32, matrix)> 'reduced'   15
 |   | |Dot22 [id P] <TensorType(float32, matrix)> ''   11
 |   | | |Elemwise{Mul}[(0, 0)] [id D] <TensorType(float32, matrix)> 'x_tilde'   8
 |   | | |W [id Q] <TensorType(float32, matrix)>
 |   | |InplaceDimShuffle{x,0} [id R] <TensorType(float32, row)> ''   2
 |   |   |B [id S] <TensorType(float32, vector)>
 |   |Dot22 [id T] <TensorType(float32, matrix)> '(dcost/dreduced)'   20
 |     |Elemwise{Composite{((i0 * (i1 - Composite{scalar_sigmoid((i0 + i1))}(i2, i3)) * Composite{scalar_sigmoid((i0 + i1))}(i2, i3) * (i4 - Composite{scalar_sigmoid((i0 + i1))}(i2, i3))) / i5)}}[(0, 2)] [id U] <TensorType(float32, matrix)> ''   18
 |     | |TensorConstant{(1, 1) of -2.0} [id V] <TensorType(float32, (True, True))>
 |     | |x [id I] <TensorType(float32, matrix)>
 |     | |Dot22 [id W] <TensorType(float32, matrix)> ''   17
 |     | | |Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)] [id O] <TensorType(float32, matrix)> 'reduced'   15
 |     | | |InplaceDimShuffle{1,0} [id X] <TensorType(float32, matrix)> 'W.T'   3
 |     | |   |W [id Q] <TensorType(float32, matrix)>
 |     | |InplaceDimShuffle{x,0} [id Y] <TensorType(float32, row)> ''   4
 |     | | |B_Prime [id Z] <TensorType(float32, vector)>
 |     | |TensorConstant{(1, 1) of 1.0} [id N] <TensorType(float32, (True, True))>
 |     | |Elemwise{mul,no_inplace} [id BA] <TensorType(float32, (True, True))> ''   16
 |     |   |InplaceDimShuffle{x,x} [id BB] <TensorType(float32, (True, True))> ''   14
 |     |   | |Subtensor{int64} [id BC] <TensorType(float32, scalar)> ''   10
 |     |   |   |Elemwise{Cast{float32}} [id BD] <TensorType(float32, vector)> ''   7
 |     |   |   | |MakeVector{dtype='int64'} [id G] <TensorType(int64, vector)> ''   5
 |     |   |   |Constant{1} [id BE] 
 |     |   |InplaceDimShuffle{x,x} [id BF] <TensorType(float32, (True, True))> ''   13
 |     |     |Subtensor{int64} [id BG] <TensorType(float32, scalar)> ''   9
 |     |       |Elemwise{Cast{float32}} [id BD] <TensorType(float32, vector)> ''   7
 |     |       |Constant{0} [id BH] 
 |     |W [id Q] <TensorType(float32, matrix)>
 |TensorConstant{1.0} [id BI] <TensorType(float32, scalar)>
 |InplaceDimShuffle{1,0} [id BJ] <TensorType(float32, matrix)> ''   19
 | |Elemwise{Composite{((i0 * (i1 - Composite{scalar_sigmoid((i0 + i1))}(i2, i3)) * Composite{scalar_sigmoid((i0 + i1))}(i2, i3) * (i4 - Composite{scalar_sigmoid((i0 + i1))}(i2, i3))) / i5)}}[(0, 2)] [id U] <TensorType(float32, matrix)> ''   18
 |Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)] [id O] <TensorType(float32, matrix)> 'reduced'   15
 |TensorConstant{1.0} [id BI] <TensorType(float32, scalar)>
RandomFunction{binomial}.0 [id E]  ''   6

Then, at the second marked section, we see that node 16 performs the multiplication of two subnodes, 14 and 13, which are both very similar except for a constant 0 or 1.

The code for node 14 reuses the same vector as node 5, but casts it to float32 (which is node 7). And then the magic, the thing I do not understand, happens: from that casted vector, a subtensor with constant 0 is selected. What does this do ?

The questions:

  1. Does the SubTensor (node 10 or node 9) select the first element of the tensor from node 7 ?
  2. Node 7 is merely a casted version of node 5. Does that vector contain the random data generated in node 6 ?
  3. Once the subtensors of node 7 are selected they are allowed to broadcast (nodes 13 and 14) to finally be multiplied with each other in node 16. Is it correct to say that node 16 thus computes the multiplication of the first random element with the second random element (from a random vector that might be quite a bit larger) ?
  4. When I print out the types of the nodes, we see that the output of the subtensor is indeed a scalar (as expected), yet the type of InplaceDimShuffle and the Elemwise{mul} is (True,True). What kind of type is that ?
  5. If Elemwise{Mul} is not specifying ‘inplace’ (as happens in node 6), which of the two children is then modified ? Is it the node associated with the RandomFunction (thus node 5), or does the RandomFunction (node 6) provide us with another copy that can be modified ?

The image graph of the above function is given below. The yellow box is the one that confuses me because, if I read it right, its output is merely a broadcastable 0.


After a day of bashing my head against this nonsense I figured out the following:

MakeVector in node 5 takes the shape values as input parameters and concatenates them into a small vector.

Thus its output is a vector with two values: the dimensions of x. The random generator then uses that as input to generate a correctly sized vector. And the yellow box in the computation merely multiplies the two dimensions with each other to calculate the element-count of the input. Answering each of the subquestions:

  1. yes, the subtensors select respectively the 0th and 1st element of the input vector.
  2. node 7 does indeed contain the casted data from node 5. However, node 5 does not contain the random data.
  3. it is wrong to say that node 16 computes the product of the two first random values. What is right: node 16 computes the product of the dimension sizes of the input vector x.
  4. The (True,True) type merely tells us the broadcasting pattern.
  5. Without inplace, an elementwise multiplication destroys one of its inputs. The inputs that are destroyed are marked in red (see the documentation on color coding).

Java join() slow

And there I was thinking that Java would offer an efficient implementation of join on threads. One that would be triggered when a thread signals it had come to an end.

Well… not so. join(0), which means “wait indefinitely for the thread”, results in a completely braindead while(isAlive()) wait() loop.

public final synchronized void join(long millis)
throws InterruptedException {
    long base = System.currentTimeMillis();
    long now = 0;

    if (millis < 0) {
        throw new IllegalArgumentException("timeout value is negative");
    }

    if (millis == 0) {
        while (isAlive()) {
            wait(0);
        }
    } else {
        while (isAlive()) {
            long delay = millis - now;
            if (delay <= 0) {
                break;
            }
            wait(delay);
            now = System.currentTimeMillis() - base;
        }
    }
}
To make this work as it should, we had to wait for a thread counter to reach 0, letting each thread lower the counter and call notifyAll. A very tedious process. Why do we have a join if it doesn’t work in a performant manner ?
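The workaround can be sketched as follows (class and method names are mine; java.util.concurrent.CountDownLatch packages the same idea):

```java
// Sketch: a shared counter that each worker thread decrements on exit,
// with notifyAll() waking whoever is waiting for all threads to finish.
public class ThreadCounter {
    private int count;

    public synchronized void inc() { count++; }

    public synchronized void dec() {
        count--;
        if (count == 0) notifyAll();   // wake the joining thread(s)
    }

    public synchronized void awaitZero() {
        boolean interrupted = false;
        while (count != 0) {
            try { wait(); }
            catch (InterruptedException e) { interrupted = true; }
        }
        if (interrupted) Thread.currentThread().interrupt();
    }

    public synchronized int value() { return count; }
}
```

Each worker calls dec() as its last action; the main thread calls awaitZero() instead of joining every thread one by one.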

JavaFx: A first look

Porting BpmDj to desktop required us to think about which platform to use. We settled on JavaFx because it has 3D support. Not that we needed that immediately, but it is something we might want to add to BpmDj somewhere in the far future.

At first sight JavaFx is fairly solid. The properties, their getters and setters are really well defined. Even listeners to collections are well defined. That aspect I like a lot, especially since I have been creating a similar system myself. Both serve the same purpose and seem to have come up with the same solutions. Basically: they did it like I would have done it.

JavaFx CSS

The render tree is quite professional, and styleable through CSS. Yet the CSS properties are badly defined. More than once I had to dump the scene tree in order to figure out which styles are used at which point. Below is a routine that does exactly that.

public static void dump(Node n) { dump(n, 0); }

private static void dump(Node n, int depth) {
    for (int i = 0; i < depth; i++) System.out.print("  ");
    System.out.print(n);                       // the node itself
    for (PseudoClass pc : n.getPseudoClassStates())
        System.out.print(" " + pc);
    System.out.println();
    if (n instanceof Parent)
        for (Node c : ((Parent) n).getChildrenUnmodifiable())
            dump(c, depth + 1);
}

The order in which rules are processed is counter-intuitive for the programmer. From the docs: “a style from a user agent style sheet has lower priority than a value set from code, which has lower priority than a Scene or Parent style sheet”.

This means that, although you might have written a Tag class extending the Label class, which changes the color of a specific entry through setBackground, it will simply have no effect. This leads to quite a lot of confusion online. On Stack Overflow there are hordes of people asking ‘how can I change this style from within my program, without using CSS’. The answers might or might not work; it all depends on whether the original poster had a stylesheet assigned to the scene or not. But even then, programmatically removing a label’s style class (and replacing it with our own) didn’t help either, because the class still ‘inherited’ scene-style properties from its container. And that might include an * selector.

Basically CSS is a total mess. The JavaFx implementation makes some good choices. Well… the best given the circumstances. And of course it is easy to criticise this, but in all honesty I would not be able to suggest a better alternative to resolve such ambiguity. Well, maybe I can: programmer trumps CSS. That is, any style set from within the program should be honored.


Graphs – In JavaFx jargon there is a scene graph. Mark the word graph. You would expect that one node could potentially appear at multiple places in the scene. The documentation however scales the idea of a graph somewhat down to: ‘A scene graph is a set of tree data structures where every item has zero or one parent, and each item is either a “leaf” with zero sub-items or a “branch” with zero or more sub-items.’ Why don’t they call it a tree then ?

Bias – Another occasion where this happens is with the word ‘content-bias’. What the hell dudes ? If you wonder what this is: it is the orientation of the layout: horizontal or vertical ? Why not call it orientation or direction ? Because they want to feel important ?

Insets – Another nice example of this is the word ‘insets’, which stands for ‘inside offsets’. So if I want padding on a control, I have to set the insets. And it goes on like this: lots of words that are either too broad, or don’t mean what they really mean. With respect to the insets it is even more confusing because, depending on the component type, you get -fx-border-insets, margins, padding, background-insets and -opaque-insets. I don’t mind having the ability to tune things, but at least document in a nice drawing what they all mean.

Percentage – a value of 0.5 is the percentage 50%. So if I say “express something as a percentage”, then you get values between 0 and 100. Yet in JavaFx there seems to be some confusion about when the value ranges to 1 and when to 100. When you read the documentation for the BackgroundSize constructor, with a parameter documented as widthAsPercentage, you would expect the width to be in the 0..100 range. Well, no: it is between 0 and 1.

Managed – if you want to decide on the layout parameters of a node yourself, and thus manage the node, you have to set this to ‘off’. Or basically, you have to tell the tree that it should not be managing the node. This is a standard input/output confusion, and something that should be avoided in code.

Smooth or non-smooth ?

That is a question, and one they apparently never answered. At various places there are markers to set things smooth, or not. Yet the render engine might or might not take this into account. To illustrate: the beatgraphs must be aliased. We want every bar clearly marked, so we do not want smoothing when we draw the beatgraphs. Thus, creating an Image which is marked as ‘non-smooth’, and then drawing it onto the GraphicsContext of a canvas… results indeed in a smoothened version (go figure). Then, at other places we actually want the render engine to smoothen something. E.g.: when we draw the logo in the about box, we need to downscale a fairly large image to a smaller size. Whatever collection of smooth options we set, the resulting image was never properly smoothed. To that end we actually had to write our own routine. Then there is the problem that Images have a smooth flag, which can only be set if the image is loaded from an InputStream of sorts. When the Image is created through a WritableImage, the smooth flag is always off, without a possibility to actually turn it on. However, don’t mind the smooth flag of the Image: the ImageView has one of its own (why have the same flag twice if it is ignored anyway ?). I could probably go on about this aspect of the engine. Suffice to say that this is extremely badly defined, and the rendering is platform dependent, which makes it a useless flag.

Menubars & Stages

The menubar – apparently a total mess to get working correctly. In particular, the menubar on icewm has submenus that overlay the menubar itself, because JavaFx subtracts the window decoration from the height. Moreover, the submenu cannot be selected as long as the menubar underneath is still selected. On dual screens: if the application is on the wrong screen, menus simply don’t open.

Which screen ? On Linux the interaction with the window manager causes even more problems. When I open the application in fullscreen (through the window manager), and the app then opens a dialog box, suddenly the entire app jumps to the other screen while the dialog remains on the first screen. It is really quite wacko how JavaFx is not able to figure out which screen to use, how large window decorations are, and how to deal with that properly.


Layout – refers to the problem of deciding where to place what on the screen. This is well defined throughout the tree. The inner loop is really a charming piece of software (no irony intended). When a node says it needs a new layout, that message propagates upward in the tree, marking the parent nodes with a ‘branch_dirty’, thereby in the next layout cycle, the various branches that required a new layout are processed quite efficiently.  So that part I like a lot.

Inconsistent interface – The thing I don’t like is that certain properties are shadowed or not part of a node as such. A node in itself has neither a minimum/preferred nor a maximum width. Yet it does have a width. And changing the width of a node depends on the context you use it in. In some classes you have to call setFitWidth; in others you have to bend over backwards to actually get the thing resized (see below); at other places setting the width with setWidth will not do a thing, and there you need to use setPrefWidth. And so on. It is always interesting to try to figure out how you will actually get a node to size correctly.

Non resizable canvases – Then there is the trick with the non-resizable canvas. If you dig into the code, you find that the class hierarchy contains multiple ‘height’ properties at different levels of the tree, all meaning slightly different things. You also figure out that the ‘resize’ operation of a node is simply not implemented, and will not resize a thing. It is quite remarkable how difficult it is to get a Canvas to rescale properly.

This even extends to the problem of figuring out what the width of a node actually is. In control nodes (most stylable nodes inherit from Control, except video, image and canvases), you cannot ask the widthProperty to tell you something useful, certainly not while you are drawing the node’s content. There you have to use getWidth – but not in a Canvas, where that width shadows the width of the node. At other places you need to ask for the preferred width (prefWidth), which will tell you something more useful.

Probably the reason the interface is so inconsistent lies in the growth of JavaFx. Initially, a node only needed to know its size; it did not need any information about its minimum/preferred/maximum sizes. That was left for more advanced layout approaches. So a new layer was slapped on top, with a similar interface, but this time presented as high level (bean) properties. Yet that was too slow, so a cache layer was added on top of that. Again with similar names. In the end, as an outsider, it is kinda hard to figure out what is actually going on. And even the theoretical possibilities often do not work out.

The always growing layout – In JavaFx there is a mismatch between what most users think will happen and what the program actually does. Nobody even bothers anymore to figure out what the rules actually are. This is, in other words, a failure of the software design, and very reminiscent of the ‘good old AWT classes’. If you cannot communicate clearly how it is supposed to work, how do you expect people to make the fullest use of it?

One good example of this is the always growing layout. The layout mechanism makes it possible to bind the width of an element to the width of its parent. However, this often leads to a loop in which the dialog keeps on growing. Nevertheless, on stackoverflow people often simply state: ‘just bind it like this’. They should really try that ‘solution’ in real life applications. Most of the time it will not do what you want.
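The kind of binding people suggest looks innocent enough. A sketch (names hypothetical) of what typically gets posted:

```java
import javafx.scene.canvas.Canvas;
import javafx.scene.layout.Pane;

public class BindItLikeThis {
    void bind(Pane parent, Canvas child) {
        // The oft-suggested 'solution': tie the child's size to the parent.
        // If the parent in turn computes its size from its children,
        // every layout pass can grow both a little further.
        child.widthProperty().bind(parent.widthProperty());
        child.heightProperty().bind(parent.heightProperty());
    }
}
```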


Performance – The render engine is a very nice step back from object orientation. Objects are used to decide what to render when, yet the render engine itself is dissociated from them: it merely receives a set of commands that it executes. On 3D accelerated cards this goes very fast. This aspect is worth applauding, although not uncommon in the industry – Android and Qt went the same direction. Yet, as in many frameworks, it is difficult to draw things sufficiently fast, and JavaFx has problems with this too. Direct drawing goes through a special purpose GraphicsContext, which functions as a canvas/bitmap used to build a texture before it goes to the underlying engine. Operations on such bitmaps are typically slow, and the resulting framerate drops dramatically when even a moderate number of objects are drawn. Compared to android, the JavaFx direct draw pipeline is very slow. When only using image-based components, the performance is acceptable.

Repaints are automatic, in theory – After layout, the content of nodes must be rendered. That is a tricky operation. Most routines live somewhere in native code, and if you want to draw your own image you might need to use a Canvas. Fair enough; I already talked about this.

However, sometimes an update to a node does not trigger a render cycle. Listviews and their list cells are particularly susceptible to this, and there is no sensible way to demand a repaint. People have been using a trick to force things to render: ask the window to make a snapshot of its content as a 1×1 pixel image. That triggers the dirty render flags of all nodes. Honestly: is that the best we can do in 2016? Take a screenshot of the entire app so that it will actually render? Interestingly, that snapshot routine itself eventually states

// we need to mark the entire scene as dirty
// because dirty logic is buggy
if (scene != null && scene.impl_peer != null)

Thus instead of solving the underlying problem, the developers worked around it. That does not give me a warm fuzzy feeling.
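For completeness, the trick itself is only a few lines against the real Node.snapshot API (the node argument is any node in the scene):

```java
import javafx.geometry.Rectangle2D;
import javafx.scene.Node;
import javafx.scene.SnapshotParameters;
import javafx.scene.image.WritableImage;

public class ForceRepaint {
    // Snapshotting a 1x1 viewport marks the entire scene dirty,
    // so everything gets rendered again on the next pulse.
    static void forceRender(Node anyNode) {
        SnapshotParameters params = new SnapshotParameters();
        params.setViewport(new Rectangle2D(0, 0, 1, 1));
        anyNode.snapshot(params, new WritableImage(1, 1));
    }
}
```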

Did this cause problems? Of course, otherwise I would not have noticed. In practice, listview cell updates would effectively screw up an entire path of update requests to the root. Not only was the list cell not updated, its parents weren’t either. To solve that we had to write code that would actually ‘change’ the content of a textfield before setting it to what we wanted. So in the end you get code like

String currentText = tempo.getText();
if (Objects.equals(currentText, info.tempo)) {
  // force a change first, otherwise the cell is never marked dirty
  if (currentText == null)
    tempo.setText("bla bla bla");
  else
    tempo.setText(null);
}
tempo.setText(info.tempo);

Yes, that was ‘entertaining’, and totally useless. The back and forth was necessary to actually get JavaFx to render our cells.

Sometimes things just refuse to render – To prohibit a node from drawing outside its area, in particular our beatgraph scroller, we have to set a clipping mask. That is done as follows:

a.rect(0, 0, getWidth(), beatgraphHeight);

Occasionally this would not work and no rendering would take place at all. After many hours of digging through code and testing, we found that when beatgraphHeight was a non-integer value, the clip was simply not performed on the GraphicsContext. What we had to do to get this working is write

a.rect(0, 0, getWidth(), Math.round(beatgraphHeight));

Whether this was solely related to my Linux install I couldn’t tell. It is nonetheless a bug that I would typically associate with AWT.

When you follow the internals of the rectangle call, you see that it converts all double coordinates to float, which is another issue one could have with JavaFx. Internally everything is float, yet the API presents everything as doubles. It would be a lot faster to keep everything at the same bit width. Not only that, you would avoid the typical java-style float-to-double conversion errors.

Internal images too large – when we drew the beatgraph overview at the top of the app, everything worked fine… until we added

g.setEffect(prelistening ? gray : null);

In order to render this effect, even for the parts it would not show in the end, it sometimes had to allocate an image larger than 16384 pixels. That was because we drew a line at x-position 5000; since that position was not visible anyway, we thought it didn’t matter.
Nevertheless, the effect rendering demanded an internal image large enough to store the full range of what we were drawing, to then make everything grey. From a general purpose point of view this is somewhat expected, because pixels outside the visible region could potentially affect pixels within it. Yet it would be useful if the implementation made some sensible estimates that reflect the reality of a color transformation (that is: stick to the sizes you got). Even in the case of blur filters, adding a margin that reflects the size of the blur radius would suffice to properly render the visible region. Also, in this case it was nearly impossible to understand what went on, because the rendering takes place in its own thread. First we tried to figure out where we allocated such a large image; we couldn’t find any. Then we tried to set breakpoints; finally we were lucky that this part of the rendering was allocated in non-native code, so we could understand what went on.

Object orientation?

Internal Datastructures – It is here that I lose my respect for the developers. The thing with data structures is the following: how many times do you hear people say: don’t use arrays (even for sensible purposes), always use ADTs such as ArrayList, Tree etc. Yet time and time again I find that the java developers hate their own data structures. In 1998 I figured out that their runtime stack was not really a stack but an array that was copied when it grew too large. The result: if the stack became larger, the program became slower. Now, in JavaFx, exactly the same problem: if the eventqueue of javafx becomes larger, the program slows down. This makes no sense. At all.

And then there is the problem of the selection model, which is simply stored as a straight linked list, without any option to quickly find an element. Of course, when you add something to the list, the listview will automatically try to find the current selection back in the list. And here the performance penalty really hits. If you batch insert 27000 elements, the selection is recovered for each of those elements, meaning that we have O(n²) performance. That is really bad: a HashMap would already have helped with finding the currently selected items, or the current selection could be recovered only once, after all elements are really inserted. In our case, having 27000 songs in a list with 1 song selected would lead to batch insertion times of about 5 minutes. That is no laughing matter; it is a fatal performance penalty.
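The difference is easy to reproduce outside JavaFx. The sketch below (plain java, hypothetical names; a comparison counter stands in for real work) contrasts re-finding the selection after every insert with recovering it once after the batch:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchInsert {
    static long scans; // counts element comparisons, a stand-in for real work

    static int find(List<Integer> list, int selected) {
        for (int i = 0; i < list.size(); i++) {
            scans++;
            if (list.get(i) == selected) return i;
        }
        return -1;
    }

    // What the selection model effectively does: walk the list to
    // recover the selection after every single insert -> O(n^2).
    static long perInsertScans(int n, int selected) {
        scans = 0;
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            list.add(i);
            find(list, selected);
        }
        return scans;
    }

    // The workaround: insert everything first, recover the
    // selection once at the end -> O(n).
    static long batchScans(int n, int selected) {
        scans = 0;
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(i);
        find(list, selected);
        return scans;
    }
}
```

With n = 1000 and the last element selected, the per-insert variant does 500500 comparisons against 1000 for the batch variant; at 27000 elements that ratio is what turned our insert into a 5 minute wait.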

Because of such crappy data structures, you often have to twist the surrounding program into corners. In the end, after we understood what went on, we removed the selection, updated the list, and then recovered the selection ourselves.

The case of the swallowed exceptions – Sometimes you just get an ‘Exception in Application start’. Which is actually not true: it is not in the application start that it takes place. If it were, you would be able to write a catch in that start method. No, it is an exception somewhere in the event queue handling while the start method has not yet finished (which covers pretty much the entire program). To debug this, the easiest way is to go to LauncherImpl and set a breakpoint at the line that tells you there is an exception. Then debug it, and when it crashes you can actually see the exception. And hope the debugger is smart enough to find the exception back in the compiled code.
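One mitigation worth trying (a sketch; whether the JavaFx event thread honors it depends on the version) is to install a default uncaught exception handler before launching, so the real stack trace at least gets printed instead of swallowed:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ExceptionSurfacer {
    // Keep the last uncaught throwable around for inspection.
    static final AtomicReference<Throwable> LAST = new AtomicReference<>();

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            LAST.set(error);
            System.err.println("Uncaught exception on " + thread.getName());
            error.printStackTrace();
        });
    }
}
```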

Swallowing exceptions just like that is not only bad framework design, it is an absolutely horrible thing to do to the users of your framework. Now they can really go on an egg hunt to find that non-specific ‘Exception in Application start’.

The singular JavaFx Application – only one JavaFx application can be launched per virtual machine. That’s annoying and breaks any decent OO design. It introduces a global state that is really not necessary. It was for instance impossible to create a loader that would download a bpmdj.jar and then jump into the JavaFx launch of the app. Jumping into that launch was not possible because the loader had already instantiated its JavaFx application. The easiest way to solve this was to bypass the entire LauncherImpl class. Thereby we got rid of the swallowed exceptions and were able to open multiple application stages without any problem.

One or more listeners – In JavaFx there is a certain ambiguity between properties, which can have multiple listeners, and events, to which only one handler can be attached. An example: attaching multiple listeners to the hiding of a stage is not possible, which is a bit weird because changes to the width can have multiple listeners.
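The asymmetry in code (a sketch against the real JavaFx API; the stage is assumed to exist):

```java
import javafx.stage.Stage;

public class ListenerAsymmetry {
    void illustrate(Stage stage) {
        // A property: as many listeners as you like, all get notified.
        stage.widthProperty().addListener((obs, oldW, newW) ->
            System.out.println("width listener 1"));
        stage.widthProperty().addListener((obs, oldW, newW) ->
            System.out.println("width listener 2"));

        // An event handler slot: the second call silently replaces the first.
        stage.setOnHiding(e -> System.out.println("hiding handler 1"));
        stage.setOnHiding(e -> System.out.println("hiding handler 2")); // only this one runs
    }
}
```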

Things I miss from Android

Event time information – Q: when was an event placed in the queue? A: impossible to know. And that is annoying if you write a real time application that must know when a key was pressed or a mouse was clicked.

A timed runLater – sometimes you want things to be done a bit later. I find it a waste of resources to spawn a thread for that, have that thread wait 3 seconds and then post a runLater message to the event queue to actually do the work. It would be a lot easier if there were a Platform.runLater(<runnable>, <delay>) call.
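Such a call is easy to sketch on top of java.util.concurrent with a single shared timer thread (plain java; in a JavaFx app the scheduled task would wrap its work in Platform.runLater):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DelayedRunLater {
    // One shared daemon timer thread instead of a fresh thread per delayed task.
    private static final ScheduledExecutorService TIMER =
        Executors.newSingleThreadScheduledExecutor(runnable -> {
            Thread t = new Thread(runnable, "delayed-runLater");
            t.setDaemon(true);
            return t;
        });

    public static ScheduledFuture<?> runLater(Runnable task, long delayMillis) {
        // In a real JavaFx app: TIMER.schedule(() -> Platform.runLater(task), ...)
        return TIMER.schedule(task, delayMillis, TimeUnit.MILLISECONDS);
    }
}
```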


From what I’ve seen, the javafx framework is a better designed framework in an early stage. Compared to AWT, many painful hooks have been removed, and among the various design choices, often the one with the most power for the least amount of code has been chosen.

A good example of this are the clipping regions. In JavaFx, each node can be clipped, but by default isn’t. This is useful when nodes animate beyond their parents’ boundaries. Initially you have the feeling that you do nothing but set clips – until you realize the advantage: you can set one clip for an entire branch without requiring each node in it to do its own clipping. Efficiency and flexibility in one neat package. And that basically summarizes JavaFx.

Sometimes the OO design is broken, and I hope this will not become a trend. At other places we noticed the standard growing pains of a platform that is still being developed.