We’ve come a long way since the first version of Explain Everything™ was released in August 2011. Believe it or not, the original plan was to create something really simple. An infinite canvas, object animation recording, and integration with multiple services were certainly satisfying accomplishments, but we realized we wanted to achieve much more.
Our own drawing algorithm was too simple. Seeing potential behind the concept, we decided to move forward by remixing an algorithm first published by Krzysztof Zabłocki. We didn’t stop there, however. As taking on challenges is written in our genetic code, we decided to come up with something better, faster, and more fluid — something that would visibly enhance our users’ experiences.
Threading the needle
The truth was that “smooth line drawing on iPad” was not widely covered in online publications. The few articles we found seemed really helpful, but they tended to provide only basic solutions. We decided to tackle the subject head-on and solve it like master engineers. After all, we have Computer Science degrees, don’t we? We did have to freshen up our calculus skills, however.
The beauty is in the details. Our app works on two threads. The first one receives a touch as a point from the multitouch screen (1. Get touch), evaluates points on a curve (2. Evaluate curve points), and creates a proper set of vertices and triangles composing an image (3. Create image). This “image” is added to an image queue (FIFO). The second thread then receives, renders, and finally displays these elements as a real image using an OpenGL engine (Draw image). The next three sections will familiarize you with the first thread. Ready?
Point receiving has to take place in a class that inherits from UIView. Three methods are overridden here: touchesBegan, touchesMoved, and touchesEnded.
The first (touchesBegan) and last (touchesEnded) methods each deliver a single piece of information: when the line starts and when it finishes. touchesMoved, however, can fire anywhere from tens to many thousands of times, depending on the length of the movement. All three methods yield the coordinates of a point:
q = (qx,qy)
The point received from the touchscreen is mapped into the square [−1, 1] × [−1, 1], since that is the standard coordinate frame for OpenGL.
Evaluate curve points
Getting the touch was just a prelude. Evaluate curve points, on the other hand, is the most crucial part of the app: it is responsible for generating points on the curve. One of the first decisions we made was the choice of curve type. In our case this was the cubic B-spline. It allowed us to join a curve composed of several segments in a very smooth, almost liquid manner: the values of the first and second derivatives match where the segments join. We wanted to prevent sharp edges at the joins and maintain curvature in order to create the best possible experience.
Just a moment after the first four points are received, the algorithm generates the first curve fragment; each subsequent point then yields the next one. This is essential for drawing fluency, because refreshing and touch receiving cannot happen faster than 60 Hz. Unfortunately, a possible downside of this method is that if the next fragment were generated from just three subsequent points (a Bézier curve), it might give the impression of the curve not keeping up with the finger during drawing.
Digging deeper: Cubic B-spline
The following formula represents the cubic B-spline:

p(t) = 1/6 · [ (1 − t)³ · qi−3 + (3t³ − 6t² + 4) · qi−2 + (−3t³ + 3t² + 3t + 1) · qi−1 + t³ · qi ]    (1)
Let us explain it briefly. qi = (qix, qiy) is the current point read from the touchscreen. Each particular curve segment is drawn for the argument t ∈ [0, 1]. We quantize this interval into K equal parts t0, t1, …, tK, so that t0 = 0 and tK = 1.
The number of sections is determined by the distance between the touchscreen points qi−1 and qi. This is done in order to maintain a roughly constant distance between the points p(tk−1) and p(tk) for k = 1, …, K.
Finally, we evaluate the points p0, …, pK, where pk = p(tk) is obtained from formula (1) at tk. These points are black in the following illustration.
It is also necessary to estimate normal vectors, perpendicular to the curve at the points p0, …, pK. We do this in order to create the segments from which the curve will be rendered. From formula (1) we can evaluate the derivative:

p′(t) = 1/6 · [ −3(1 − t)² · qi−3 + (9t² − 12t) · qi−2 + (−9t² + 6t + 3) · qi−1 + 3t² · qi ]
The violet vector in illustration 2 is the tangent vector to the curve at pk, defined as vk = p′(tk).
Knowing the tangent, it is easy to evaluate the normal vector nk = (nxk, nyk) from the following formulas:

nxk = −vyk / ‖vk‖,    nyk = vxk / ‖vk‖,

where ‖vk‖ denotes the vector’s length. For your convenience, the normal vector is orange in illustration 2.
In addition, wK (the line thickness) at pK is estimated from the distance between the points qi−1 and qi. The line thickness w0 at p0 is taken from the previous iteration. The thickness at pk, in turn, is obtained by linear interpolation:

wk = (1 − tk) · w0 + tk · wK
Finally, let’s briefly consider the cases in which curve drawing finishes before four points have been received. This happens in several situations, such as:
- a single point received from the touchscreen,
- a segment between the points, when two points were received, and finally
- a cubic Bézier curve, when three points were received.
We handle each of these cases explicitly and did our best to prevent skips and lags.
In the last part of our drawing method, we use estimated values to generate a set of vertices for OpenGL. As a reminder, we use:
- points – p0, …, pK
- normal vectors – n0, …, nK
- curve thickness – w0, …, wK
We know that a raw, aliased image is unattractive to the human eye. That’s why we went for an anti-aliasing effect. To obtain it, we generate two types of vertices in the RGBA color format.
Vertices that are closer to the curve (the green points Bk and Ck in illustration 2) have their alpha value set to 1. For vertices farther from the curve (the blue points Ak and Dk in the illustration), we set alpha to 0. We compute the points according to the following formulas:

Ak = pk + (wk + waa) · nk,    Bk = pk + wk · nk,
Ck = pk − wk · nk,    Dk = pk − (wk + waa) · nk,
where waa indicates the anti-aliasing thickness, which has been determined as a constant. Next, we create triangles combined into a single segment: AkAk+1Bk+1, AkBkBk+1, BkBk+1Ck+1, BkCkCk+1, CkCk+1Dk+1, CkDkDk+1. The list of vertices and triangles from all segments is combined into a single image, which is sent to the queue for the second thread.
The second thread is driven by a CADisplayLink object. The image-refreshing function uses the standard OpenGL call glDrawElements after setting up the buffers of vertices and triangles. It is essential that blending (GL_BLEND) is enabled with the glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blending function to obtain the anti-aliasing effect.
After preparing the above formulas, we implemented them in Objective-C. The result can be observed in Explain Everything as the “Ink” drawing style (see diagram 10).
As you can deduce by now, our existing “Ink” algorithm is not the final step. We will continue to iterate and improve it in the future. We strongly believe that there is no end to evolution and hope you will share our adventurous spirit of taking on challenges, learning new things, and inventing novel solutions.
Note: the other available option is “Calligraphy”, the open-source algorithm by Krzysztof Zabłocki. If you wish to know more, read about it under this link and feel free to use it.