
September 21 2017

13:00

An Introduction to ETS Tables in Elixir

When crafting an Elixir program, you often need to share state. For example, in one of my previous articles I showed how to code a server that performs various calculations and keeps the result in memory (and later we saw how to make this server bullet-proof with the help of supervisors). There is a problem, however: if you have a single process that takes care of the state and many other processes that access it, performance may suffer, simply because the process can serve only one request at a time.

However, there are ways to overcome this problem, and today we are going to talk about one of them. Meet Erlang Term Storage tables or simply ETS tables, a fast in-memory storage that can host tuples of arbitrary data. As the name implies, these tables were initially introduced in Erlang but, as with any other Erlang module, we can easily use them in Elixir as well.

In this article you will:

  • Learn how to create ETS tables and options available upon creation.
  • Learn how to perform read, write, delete and some other operations.
  • See ETS tables in action.
  • Learn about disk-based ETS tables and how they differ from in-memory tables.
  • See how to convert ETS and DETS back and forth.

All code examples work with both Elixir 1.4 and 1.5, which was recently released.

Introduction to ETS Tables

As I mentioned earlier, ETS tables are in-memory storage that contains tuples of data (called rows). Multiple processes may access a table by its id, or by a name represented as an atom, and perform read, write, delete and other operations. ETS tables are created by a separate process, so if this process is terminated, the table is destroyed. However, there is no automatic garbage collection mechanism, so the table may hang around in memory for quite some time.

Data in an ETS table is represented by a tuple {:key, value1, value2, ..., valueN}. You can easily look up the data by its key or insert a new row, but by default there can't be two rows with the same key. Key-based operations are very fast, but if for some reason you need to produce a list from an ETS table and, say, perform complex manipulations of the data, that's possible too.

What's more, there are disk-based ETS tables available that store their contents in a file. Of course, they operate slower, but this way you get a simple file storage without any hassle. On top of that, in-memory ETS can be easily converted to disk-based and vice versa.

So, I think it's time to start our journey and see how the ETS tables are created!

Creating an ETS Table

To create an ETS table, employ the new/2 function. Because we are using an Erlang module, its name is written as an atom:
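
A minimal sketch (the variable and table names here are illustrative):

    cool_table = :ets.new(:cool_table, [])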

Note that until recently you could only create up to 1,400 tables per BEAM instance, but this is not the case anymore—you are only limited to the amount of available memory.

The first argument passed to the new function is the table's name (alias), whereas the second one contains a list of options. The cool_table variable now contains a number that identifies the table in the system:
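
Inspecting the variable shows the identifier (an integer or a reference, depending on your OTP version):

    IO.inspect(cool_table)
    # => #Reference<0.1705.1706.42> — the exact value will differ on your machine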

You may now use this variable to perform subsequent operations on the table (read and write data, for example).

Available Options

Let's talk about the options that you may specify when creating a table. The first (and somewhat strange) thing to note is that, by default, the table's alias has no effect: you cannot use it to access the table in any way. Still, the alias must be passed when the table is created.

To be able to access the table by its alias, you must provide a :named_table option like this:
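
For example (the table name is illustrative):

    :ets.new(:cool_table, [:named_table])
    :ets.insert(:cool_table, {:key, "value"})   # the alias now works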

By the way, if you'd like to rename the table, it can be done using the rename/2 function:
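
A one-liner sketch:

    :ets.rename(:cool_table, :cooler_table)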

Next, as already mentioned, a table cannot contain multiple rows with the same key, and this is dictated by the type. There are four possible table types:

  • :set—the default type. You can't have multiple rows with exactly the same key. The rows are not ordered in any particular manner.
  • :ordered_set—the same as :set, but the rows are ordered by key (in Erlang term order).
  • :bag—multiple rows may have the same key, but the rows still cannot be fully identical.
  • :duplicate_bag—rows can be fully identical.

There is one thing worth mentioning regarding the :ordered_set tables. As Erlang's documentation says, these tables treat keys as equal when they compare equal, not only when they match. What does that mean?

Two terms in Erlang match only if they have the same value and the same type. So integer 1 matches only another integer 1, but not float 1.0, because the types differ. Two terms compare equal, however, if either they have the same value and type, or if both are numbers that represent the same value. This means that 1 and 1.0 compare equal.

To provide the table's type, simply add an element to the list of options:
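
For instance, to get a :duplicate_bag (names illustrative):

    :ets.new(:cool_table, [:named_table, :duplicate_bag])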

Another interesting option that you can pass is :compressed. It means that the data inside the table (but not the keys) will be—guess what—stored in a compact form. Of course, operations performed on the table will become slower.

Next up, you can control which element in the tuple should be used as the key. By default, the first element (position 1) is used, but this can be changed easily:
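
Something like this, using the keypos option:

    :ets.new(:cool_table, [:named_table, keypos: 2])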

Now the second element of each tuple will be treated as the key.

The last but not least option controls the table's access rights. These rights dictate which processes are able to access the table:

  • :public—any process can perform any operation on the table.
  • :protected—the default value. Only the owner process can write to the table, but all the processes can read.
  • :private—only the owner process can access the table.

So, to make a table private, you would write:
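
A sketch:

    :ets.new(:private_table, [:private])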

Alright, enough talking about options—let's see some common operations that you can perform on tables!

Write Operations

In order to read something from the table, you first need to write some data there, so let's start with the latter operation. Use the insert/2 function to put data into the table:
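
For example, continuing with the illustrative named table from above:

    :ets.insert(:cool_table, {:weather, "sunny"})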

You may also pass a list of tuples like this:
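
For instance:

    :ets.insert(:cool_table, [{:temperature, 25}, {:humidity, 60}])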

Note that if the table has a type of :set and a new key matches an existing one, the old data will be overwritten. Similarly, if a table has a type of :ordered_set and a new key compares equal to the old one, the data will be overwritten, so pay attention to this.

The insert operation (even with multiple tuples at once) is guaranteed to be atomic and isolated, which means that either everything is stored in the table or nothing at all. Also, other processes won't be able to see the intermediate result of the operation. All in all, this is pretty similar to SQL transactions.

If you are concerned about duplicating keys or do not want to overwrite your data by mistake, use the insert_new/2 function instead. It is similar to insert/2 but will never insert duplicating keys and will instead return false. This is the case for the :bag and :duplicate_bag tables as well:
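
A sketch, reusing the :weather key written earlier:

    :ets.insert_new(:cool_table, {:weather, "rainy"})
    # => false — the :weather key already exists, so nothing is written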

If you provide a list of tuples, each key will be checked, and the operation will be cancelled if even one of the keys is duplicated.

Read Operations

Great, now we have some data in our table—how do we fetch it? The easiest way is to perform a lookup by key:
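
For example:

    :ets.lookup(:cool_table, :weather)
    # => [weather: "sunny"]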

Remember that for an :ordered_set table, the key should compare equal to the provided value. For all other table types, it should match. Also, if a table is a :bag or a :duplicate_bag, the lookup/2 function may return a list with multiple elements:
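
A sketch, assuming :dup_table was created with [:named_table, :duplicate_bag]:

    :ets.insert(:dup_table, {:color, "red"})
    :ets.insert(:dup_table, {:color, "red"})
    :ets.lookup(:dup_table, :color)
    # => [color: "red", color: "red"]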

Instead of fetching a list, you may grab an element in the desired position using the lookup_element/3 function:
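
For instance:

    :ets.insert(:cool_table, {:number, 42})
    :ets.lookup_element(:cool_table, :number, 2)
    # => 42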

In this code, we are getting the row under the key :number and then taking the element in the second position. It also works perfectly with :bag or :duplicate_bag:
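
With the :dup_table sketch from above, a list of elements is returned:

    :ets.lookup_element(:dup_table, :color, 2)
    # => ["red", "red"]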

If you would like to simply check if some key is present in the table, use member/2, which returns either true or false:
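
For example:

    :ets.member(:cool_table, :number)   # => true
    :ets.member(:cool_table, :nothing)  # => false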

You may also get the first or the last key in a table by using first/1 and last/1 respectively:
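
A sketch (for a :set, the internal ordering is unspecified):

    :ets.first(:cool_table)
    :ets.last(:cool_table)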

On top of that, it is possible to determine the previous or the next key based on the provided one. If such a key cannot be found, :"$end_of_table" will be returned:
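
For instance:

    :ets.next(:cool_table, :number)   # the key that follows :number
    :ets.prev(:cool_table, :weather)  # or :"$end_of_table" if there is none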

Note, however, that table traversal using functions like first, next, last or prev is not isolated. This means that a process may remove or add more data to the table while you are iterating over it. One way to overcome this issue is by using safe_fixtable/2, which fixes the table and ensures that each element will be fetched only once. The table remains fixed until the process releases it:
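
A sketch of the fix-traverse-release pattern:

    :ets.safe_fixtable(:cool_table, true)
    # ... traverse the table with first/1 and next/2 ...
    :ets.safe_fixtable(:cool_table, false)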

Lastly, if you'd like to find an element in the table and remove it, employ the take/2 function:
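
For example:

    :ets.take(:cool_table, :number)
    # => [number: 42] — the row is returned and removed from the table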

Delete Operations

Okay, so now let's say you no longer need the table and wish to get rid of it. Use delete/1 for that:
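
For instance:

    :ets.delete(:cool_table)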

Of course, you may delete a row (or multiple rows) by its key as well:
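
A sketch:

    :ets.delete(:cool_table, :weather)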

To clear out the entire table, utilize delete_all_objects/1:
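
For example:

    :ets.delete_all_objects(:cool_table)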

And, lastly, to find and remove a specific object, use delete_object/2:
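
Note that the whole tuple is passed, not just the key:

    :ets.delete_object(:cool_table, {:humidity, 60})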

Converting the Table

An ETS table can be converted to a list anytime by using the tab2list/1 function:
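
For instance:

    :ets.tab2list(:cool_table)
    # => [temperature: 25, humidity: 60, ...] (order is unspecified for a :set)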

Remember, however, that fetching the data from the table by the keys is a very fast operation, and you should stick to it if possible.

You may also dump your table to a file using tab2file/2:
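
A sketch (the filename is illustrative):

    :ets.tab2file(:cool_table, 'cool_table.db')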

Note that the second argument should be a charlist (a single-quoted string).

There are a handful of other operations available that can be applied to the ETS tables, and of course we are not going to discuss them all. I really recommend skimming through the Erlang documentation on ETS to learn more.

Persisting the State With ETS

To summarize the facts that we have learned so far, let's modify a simple program that I have presented in my article about GenServer. This is a module called CalcServer that allows you to perform various calculations by sending requests to the server or fetching the result:
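
Here is a minimal sketch of such a server (the supported operations shown are illustrative, and the exact implementation may differ from the original article's listing):

    defmodule CalcServer do
      use GenServer

      def start(initial_value) do
        GenServer.start(__MODULE__, initial_value, name: __MODULE__)
      end

      def init(initial_value) when is_number(initial_value), do: {:ok, initial_value}
      def init(_), do: {:stop, "The initial value must be a number!"}

      # Interface functions
      def add(number), do: GenServer.cast(__MODULE__, {:add, number})
      def sqrt, do: GenServer.cast(__MODULE__, :sqrt)
      def result, do: GenServer.call(__MODULE__, :result)

      # Callbacks
      def handle_cast({:add, number}, state), do: {:noreply, state + number}
      def handle_cast(:sqrt, state), do: {:noreply, :math.sqrt(state)}
      def handle_call(:result, _from, state), do: {:reply, state, state}
    end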

Currently our server doesn't support all mathematical operations, but you may extend it as needed. Also, my other article explains how to convert this module to an application and take advantage of supervisors to take care of the server crashes.

What I'd like to do now is add another feature: the ability to log all the mathematical operations that were performed, along with the passed argument. These operations will be stored in an ETS table so that we can fetch them later.

First of all, modify the init function so that a new named private table with a type of :duplicate_bag is created. We are using :duplicate_bag because two identical operations with the same argument may be performed:
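
The init callback might now look like this (:ops_log is an illustrative table name, reused in the sketches below):

    def init(initial_value) when is_number(initial_value) do
      :ets.new(:ops_log, [:named_table, :private, :duplicate_bag])
      {:ok, initial_value}
    end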

Now tweak the handle_cast callback so that it logs the requested operation, prepares a formula, and then performs the actual computation:
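
One possible shape:

    def handle_cast(operation, state) do
      operation
      |> prepare_and_log
      |> calculate(state)
    end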

Here is the prepare_and_log private function:
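
A sketch (the set of supported operations is illustrative):

    defp prepare_and_log(operation) do
      log(operation)

      case operation do
        {:add, number} -> fn state -> state + number end
        :sqrt -> fn state -> :math.sqrt(state) end
        _ -> nil
      end
    end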

We log the operation right away (the corresponding function will be presented in a moment), and then return the appropriate function, or nil if we don't know how to handle the operation.

As for the log function, we should either support a tuple (containing both the operation's name and the argument) or an atom (containing only the operation's name, for example, :sqrt):
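
Two clauses cover both cases:

    defp log({operation, argument}) do
      :ets.insert(:ops_log, {operation, argument})
    end

    defp log(operation) when is_atom(operation) do
      :ets.insert(:ops_log, {operation})
    end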

Next, the calculate function, which either returns a proper result or a stop message:
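
A sketch, matching the nil returned for unknown operations:

    defp calculate(nil, state), do: {:stop, "Unsupported operation", state}
    defp calculate(formula, state), do: {:noreply, formula.(state)}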

Finally, let's present a new interface function to fetch all the performed operations by their type:
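
For example (the function name is illustrative):

    def operations(type) do
      GenServer.call(__MODULE__, {:operations, type})
    end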

Handle the call:
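
    def handle_call({:operations, type}, _from, state) do
      {:reply, fetch_operations(type), state}
    end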

And perform the actual lookup:
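
    defp fetch_operations(type) do
      :ets.lookup(:ops_log, type)
    end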

Now test everything:
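
A quick session might look like this:

    CalcServer.start(0)
    CalcServer.add(1)
    CalcServer.add(2)
    CalcServer.result()          # => 3
    CalcServer.operations(:add)  # => [add: 1, add: 2]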

The result is correct, because we have performed two :add operations with the arguments 1 and 2. Of course, you may further extend this program as you see fit. Still, don't abuse ETS tables, and employ them only when they will really boost performance—in many cases, plain immutable data structures are a better solution.

Disk ETS

Before wrapping up this article, I wanted to say a couple of words about disk-based ETS tables, or simply DETS.

DETS are pretty similar to ETS: they use tables to store various data in the form of tuples. The difference, as you've guessed, is that they rely on file storage instead of memory and have fewer features. DETS have functions similar to the ones we discussed above, but some operations are performed a bit differently.

To open a table, you need to use either open_file/1 or open_file/2—there is no new/2 function like in the :ets module. Since we don't have any existing table yet, let's stick to open_file/2, which is going to create a new file for us:
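
A minimal sketch (the table name is illustrative):

    {:ok, table} = :dets.open_file(:dets_table, [type: :set])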

The filename is equal to the table's name by default, but this can be changed. The second argument passed to open_file is a list of options written in the form of tuples. There are a handful of available options, like :access or :auto_save. For instance, to change the filename, use the following option:
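
For example (the filename is illustrative; note the charlist):

    {:ok, table} = :dets.open_file(:dets_table, [file: 'custom_storage.db', type: :set])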

Note that there is also a :type option that may have one of the following values:

  • :set
  • :bag
  • :duplicate_bag

These types are the same as for the ETS. Note that DETS cannot have a type of :ordered_set.

There is no :named_table option, so you can always use the table's name to access it.

Another thing worth mentioning is that the DETS tables must be properly closed:
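
    :dets.close(:dets_table)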

If you don't do this, the table will be repaired the next time it is opened.

You perform read and write operations just like you did with ETS:
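
For example:

    :dets.insert(:dets_table, {:weather, "sunny"})
    :dets.lookup(:dets_table, :weather)
    # => [weather: "sunny"]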

Bear in mind, though, that DETS are slower than ETS because Elixir will need to access the disk which, of course, takes more time.

Note that you may convert ETS and DETS tables back and forth with ease. For example, let's use to_ets/2 and copy the contents of our DETS table in-memory:
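
A sketch (the in-memory table name is illustrative):

    memory_table = :ets.new(:memory_table, [])
    :dets.to_ets(:dets_table, memory_table)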

Copy the ETS's contents to DETS using to_dets/2:
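
    :ets.to_dets(memory_table, :dets_table)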

To sum up, disk-based ETS is a simple way to store contents in the file, but this module is slightly less powerful than ETS, and the operations are slower as well.

Conclusion

In this article, we have talked about ETS and disk-based ETS tables that allow us to store arbitrary terms in memory and in files respectively. We have seen how to create such tables, what the available types are, how to perform read and write operations, how to destroy tables, and how to convert them to other types. You may find more information about ETS in the Elixir guide and on the Erlang official page.

Once again, don't overuse ETS tables, and try to stick with immutable data if possible. In some cases, however, ETS may provide a nice performance boost, so knowing about this solution is helpful in any case.

Hopefully, you've enjoyed this article. As always, thank you for staying with me, and see you really soon!

12:07

Create Interactive Charts Using Plotly.js, Part 4: Bubble and Dot Charts

So far in the series, you have learned how to create line charts and bar charts in Plotly.js. As I mentioned in the introductory tutorial of the series, Plotly.js is not limited to just a few chart types. There are more than 20 different kinds of charts that you can create using the library. Each of these chart types has its own set of customization options as well as attributes that are common to multiple charts.

In this tutorial, you will learn how to create bubble and dot plot charts in Plotly.js. Both these chart types use the same scatter traces that we used while creating line charts. The important difference this time is that the mode attribute will be set to markers.

Creating Bubble Charts in Plotly.js

Bubble charts are used to display three dimensions of data on a chart. The data associated with every entity is plotted using bubbles or disks, where the position of a disk on the x-y axis marks its x-y values, and the area of a disk is used to plot the value of the third data point. Bubble charts are a variation of scatter charts. For this reason, it makes sense to set the type attribute to scatter while creating a bubble chart.

There are a few things that you should remember while creating bubble charts. First, the third value of a given data point is represented by the area of the bubble and not its radius. The radius is actually proportional to the square root of the actual value. Second, you can only use them to plot positive values. This makes sense because the area of a bubble cannot be negative or zero.

Now, let's create our first bubble chart by setting the mode attribute to markers. All the attributes that control the appearance of bubbles in the chart are present under the marker object. The size attribute of the marker object can be used to specify a size for the bubbles as the third data point. This attribute can either be a number or an array of numbers. When creating bubble charts, it would be very rare for all the entities to have the same value for their third data point. So you will generally set the size attribute as an array.

The opacity of all the bubbles can be controlled using the opacity attribute. Just like size, this attribute also accepts values as a number or an array of numbers with values between 0 and 1. In the case of line charts, the default value of opacity for different bubbles or disks was 1. In the case of bubble charts, the default value of opacity becomes 0.7.

Another very useful attribute that you can use to accurately create bubble charts is the sizemode parameter. This attribute determines if the value specified in the size attribute should be considered the area of a bubble or its diameter. The default value for sizemode is diameter. However, if you are creating bubble charts, it makes more sense to set the value of sizemode to area. This way, you won't have to do any calculations to determine the right bubble size for a value yourself. You can use the following code to create a bubble chart using the parameters we just discussed.
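
Putting those attributes together, a minimal sketch might look like this (the data arrays and the 'graph' target div are placeholders):

    var trace = {
      x: [1, 2, 3, 4],
      y: [10, 11, 12, 13],
      mode: 'markers',
      type: 'scatter',
      marker: {
        size: [400, 900, 1600, 2500],  // third data point: bubble areas
        sizemode: 'area',
        opacity: 0.7
      }
    };

    Plotly.newPlot('graph', [trace]);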

Right now, every bubble has the same color. The color of individual bubbles or the whole trace can be changed at once using the color attribute.

Another way to set the color of different bubbles is to use color bars. In such cases, the bubble color is determined based on the numeric value specified in the array that you passed to the color attribute of the marker object. This color bar can act as the fourth data point when plotting a chart.

For example, let's say you are plotting a bubble chart that shows the location and area of different parks in a large city. A bubble chart would be perfect in this situation. However, the tree density of each park or the average number of people that visit each park can be different. If the color of each bubble is specified as a numerical value, it will be possible to assign a different color to individual bubbles to plot the value of the fourth data point.

In our example, the color corresponding to different magnitudes of tree density can be specified using the colorscale attribute. When using this attribute, you are required to at least specify a mapping for the lowest (0) as well as highest (1) values. An example of a valid colorscale value is [[0, 'rgb(0, 0, 0)'], [1, 'rgb(0, 255, 0)']]. You also have the option to use a palette name string as a colorscale value. Greys, YlGnBu, Greens, YlOrRd, Bluered, RdBu, Reds, Blues, Picnic, Rainbow, Portland, Jet, Hot, Blackbody, Earth, Electric, and Viridis are all considered valid values.

You can specify the lowest numerical color value to which the lowest color scale value should be mapped using the cmin attribute. Similarly, you can also specify the highest color value to which the highest color scale value should be mapped using the cmax attribute. It is possible to reverse the color mapping by setting the value of the reversescale parameter to true. In this case, the lowest color will be mapped to the highest value, and the highest color will be mapped to the lowest value.

The colorbar object provides a lot of parameters that can be used to control its appearance. The width of the color bar created by Plotly.js can be controlled using the thickness parameter. The thickness is specified in pixels, and its default value is 30. The length of the color bar, excluding the padding on both ends, can be specified using the len attribute. The x and y position of the color bar can be set with the help of the x and y parameters. The position is specified in terms of plot fractions, and its valid value can lie anywhere between -2 and 3. The default value for x is 1.02, and the default value for y is 0.5. The x and y padding of the color bar can be controlled similarly using the xpad and ypad parameters. You can assign a title to the color bar using the title attribute. The position of that title can be specified with respect to the color bar using the titleside parameter. Valid values for this parameter are top, bottom, and right. The default value is top.

The following code will create a bubble chart where the darkness of the green color is mapped to the tree density in that park.
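
Here is one possible sketch (the park coordinates, areas, and densities are hypothetical):

    var parkX = [10, 20, 30, 40];
    var parkY = [15, 25, 10, 30];
    var parkAreas = [400, 900, 1600, 2500];
    var treeDensity = [20, 65, 40, 90];

    var trace = {
      x: parkX,
      y: parkY,
      mode: 'markers',
      type: 'scatter',
      marker: {
        size: parkAreas,
        sizemode: 'area',
        color: treeDensity,  // numeric values mapped onto the color scale
        colorscale: [[0, 'rgb(235, 255, 235)'], [1, 'rgb(0, 100, 0)']],
        showscale: true,
        colorbar: {
          title: 'Tree Density',
          titleside: 'top',
          thickness: 30,
          len: 0.8
        }
      }
    };

    Plotly.newPlot('graph', [trace]);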

Creating a Dot Plot in Plotly.js

A dot plot is a chart that is used to plot points on a very simple scale. You should keep in mind that dot plots are only suitable for a small amount of data. Plotting a large number of points will make the chart very cluttered. Just like bubble charts, a dot plot also requires you to set the type attribute to scatter and the mode attribute to markers. Since dot plots also use markers to plot a point, all the customization options for them are also available under the marker object.

The following code creates a dot plot that shows the marks obtained by top two students in an exam every year. I have also used some other attributes related to the legend and the x-axis to change the overall appearance of our plot.
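
A sketch along those lines (the years and marks are hypothetical):

    var years = [2013, 2014, 2015, 2016, 2017];

    var traceA = {
      x: years, y: [92, 94, 93, 96, 95],
      mode: 'markers', type: 'scatter',
      name: 'First Rank', marker: { size: 12 }
    };

    var traceB = {
      x: years, y: [88, 90, 89, 91, 92],
      mode: 'markers', type: 'scatter',
      name: 'Second Rank', marker: { size: 12 }
    };

    var layout = {
      title: 'Top Marks per Year',
      xaxis: { tickcolor: 'black', tickwidth: 2 },
      legend: { x: 0.5, y: 1.1, xanchor: 'center' }
    };

    Plotly.newPlot('graph', [traceA, traceB], layout);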

We have already covered the different attributes of the marker object in the previous section. This time, I have made some changes in the layout of the chart. Using the title attribute to assign a title to your chart will make it more informative by quickly letting the viewers know what the chart is about. I have also changed the appearance of the x-axis line to make it stand out from other lines. As you can see, the color and width of the ticks can be controlled using the tickcolor and tickwidth attributes.

The position of the legend can be specified using the x and y attributes. Both these numbers accept a value between -2 and 3. The position is specified in terms of fractions. The anchor point for the x and y positions can be specified using the xanchor and yanchor attributes. These two attributes determine the points which should be used as a reference to measure the distance specified by the x and y attributes. In the above example, the 0.5 distance of the x attribute is measured from the center of the legend because xanchor has been set to center.

Final Thoughts

This tutorial showed you how to create bubble charts and dot plots in Plotly.js using the attributes of the same marker object. You also learned how to add extra information to your bubble charts using color bars and set the color scale of those bars according to your own requirements. 

I have tried to cover almost all the marker attributes that you might have to use in your projects. If you need to know more about creating bubble and dot plot charts, you can head to the marker reference section of the Plotly.js website. In the next tutorial, you will learn how to create pie and gauge charts in Plotly.js.

If you have any questions related to this tutorial, feel free to let me know in the comments.

11:00

Kotlin From Scratch: Advanced Functions

Kotlin is a functional language, and that means functions are front and center. The language is packed with features to make coding functions easy and expressive. In this post, you'll learn about extension functions, higher-order functions, closures, and inline functions in Kotlin.

In the previous article, you learned about top-level functions, lambda expressions, anonymous functions, local functions, infix functions, and finally member functions in Kotlin. In this tutorial, we'll continue to learn more about functions in Kotlin by digging into:

  • extension functions 
  • higher-order functions
  • closures
  • inline functions

1. Extension Functions 

Wouldn't it be nice if the String type in Java had a method to capitalize the first letter in a String—like ucfirst() in PHP? We could call this method upperCaseFirstLetter().

To realize this, you could create a String subclass which extends the String type in Java. But remember that the String class in Java is final—which means that you can't extend it. A possible solution for Kotlin would be to create helper functions or top-level functions, but this might not be ideal because we then couldn't make use of the IDE auto-complete feature to see the list of methods available for the String type. What would be really nice would be to somehow add a function to a class without having to inherit from that class.

Well, Kotlin has us covered with yet another awesome feature: extension functions. These give us the ability to extend a class with new functionality without having to inherit from that class. In other words, we don't need to create a new subtype or alter the original type. 

An extension function is declared outside the class it wants to extend. In other words, it is also a top-level function (if you want a refresher on top-level functions in Kotlin, visit the More Fun With Functions tutorial in this series). 

Along with extension functions, Kotlin also supports extension properties. In this post, we'll discuss extension functions, and we'll wait until a future post to discuss extension properties along with classes in Kotlin. 

Creating an Extension Function

As you can see in the code below, an extension function is declared just like a normal top-level function. This extension function lives in a package called com.chike.kotlin.strings.
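
A sketch of such an extension:

    package com.chike.kotlin.strings

    // String is the receiver type; `this` refers to the receiver object
    fun String.upperCaseFirstLetter(): String {
        return this.substring(0, 1).toUpperCase() + this.substring(1)
    }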

To create an extension function, you have to prefix the name of the class that you're extending before the function name. The class name or the type on which the extension is defined is called the receiver type, and the receiver object is the class instance or value on which the extension function is called.

Note that the this keyword inside the function body references the receiver object or instance. 

Calling an Extension Function

After creating your extension function, you'll first need to import it into the package or file where you want to use it. Then calling the function is just the same as calling any other method of the receiver type class.
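
For example:

    import com.chike.kotlin.strings.upperCaseFirstLetter

    fun main() {
        println("chike".upperCaseFirstLetter())  // Chike
    }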

In the example above, the receiver type is class String, and the receiver object is "chike". If you're using an IDE such as IntelliJ IDEA that has the IntelliSense feature, you'd see your new extension function suggested among the list of other functions in a String type. 

IntelliJ IDEA intellisense feature

Java Interoperability

Note that behind the scenes, Kotlin will create a static method. This static method's first argument is the receiver object. So it is easy for Java callers to call this static method and then pass the receiver object as an argument. 

For example, if our extension function was declared in a StringUtils.kt file, the Kotlin compiler would create a Java class StringUtilsKt with a static method upperCaseFirstLetter().
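
A sketch of the Java caller side:

    // Java: the receiver object becomes the first argument
    String result = StringUtilsKt.upperCaseFirstLetter("chike");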

This means that Java callers can simply call the method by referencing its generated class, just like for any other static method. 

Remember that this Java interop mechanism is similar to how top-level functions work in Kotlin, as we discussed in the More Fun With Functions post!

Extension Functions vs. Member Functions

Note that extension functions can't override functions already declared in a class or interface—known as member functions (if you want a refresher on member functions in Kotlin, take a look at the previous tutorial in this series). So, if you have defined an extension function with exactly the same function signature—the same function name and same number, types and order of arguments, regardless of return type—the Kotlin compiler won't invoke it. In the process of compilation, when a function is invoked, the Kotlin compiler will first look for a match in the member functions defined in the instance type or in its superclasses. If there is a match, then that member function is the one that is invoked or bound. If there is no match, then the compiler will invoke any extension function of that type. 

So in summary: member functions always win. 

Let's see a practical example.
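
A sketch of the setup described below (the messages are illustrative):

    class Student {
        fun printResult() { println("Member function: printResult()") }
        fun expel() { println("Member function: expel()") }
    }

    // Extension functions with clashing names
    fun Student.printResult() { println("Extension function: printResult()") }
    fun Student.expel(reason: String) { println("Expelled: $reason") }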

In the code above, we defined a type called Student with two member functions: printResult() and expel(). We then defined two extension functions that have the same names as the member functions. 

Let's call the printResult() function and see the result. 
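
    Student().printResult()  // prints "Member function: printResult()"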

As you can see, the function that was invoked or bound was the member function, and not the extension function with the same function signature (though IntelliJ IDEA would still give you a hint about it).

However, calling the member function expel() and the extension function expel(reason: String) will produce different results because the function signatures are different. 
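
Continuing the sketch:

    val student = Student()
    student.expel()           // member function wins: "Member function: expel()"
    student.expel("cheating") // different signature, so the extension runs: "Expelled: cheating"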

Member Extension Functions

You'll declare an extension function as a top-level function most of the time, but note that you can also declare them as member functions. 
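
A sketch of what the paragraph below describes (class names are illustrative):

    class ClassB {
        override fun toString() = "ClassB (extension receiver)"
    }

    class ClassA {
        override fun toString() = "ClassA (dispatch receiver)"

        // a member extension function: extends ClassB, but lives inside ClassA
        fun ClassB.exFunction() {
            println(toString())  // resolves to ClassB's toString()
        }

        fun caller(b: ClassB) = b.exFunction()
    }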

In the code above, we declared an extension function exFunction() of ClassB type inside another class ClassA. The dispatch receiver is the instance of the class in which the extension is declared, and the instance of the receiver type of the extension method is called the extension receiver. When there is a name conflict or shadowing between the dispatch receiver and the extension receiver, note that the compiler chooses the extension receiver. 

So in the code example above, the extension receiver is an instance of ClassB—so it means the toString() method is of type ClassB when called inside the extension function exFunction(). For us to invoke the toString() method of the dispatch receiver ClassA instead, we need to use a qualified this:
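
Still declared inside ClassA, the qualified form would look like this:

    fun ClassB.exFunction() {
        println(this@ClassA.toString())  // explicitly use the dispatch receiver
    }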

2. Higher-Order Functions 

A higher-order function is just a function that takes another function (or lambda expression) as a parameter, returns a function, or does both. The last() collection function is an example of a higher-order function from the standard library. 
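
For example (the list contents are illustrative):

    val names = listOf("Ada", "Alan", "Grace", "Anita")
    val result = names.last { it.startsWith("A") }  // "Anita"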

Here we passed a lambda to the last function to serve as a predicate to search within a subset of elements. We'll now dive into creating our own higher-order functions in Kotlin. 

Creating a Higher-Order Function

Looking at the function circleOperation() below, it has two parameters. The first, radius, accepts a double, and the second, op, is a function that accepts a double as input and also returns a double as output—we can say more succinctly that the second parameter is "a function from double to double". 
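
A sketch of such a function:

    fun circleOperation(radius: Double, op: (Double) -> Double): Double {
        return op(radius)
    }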

Observe that the op function parameter types for the function are wrapped in parentheses (), and the output type is separated by an arrow. The function circleOperation() is a typical example of a higher-order function that accepts a function as a parameter.

Invoking a Higher-Order Function

In the invocation of this circleOperation() function, we pass another function, calArea(), to it. (Note that if the method signature of the passed function doesn't match what the higher-order function declares, the function call won't compile.) 

To pass the calArea() function as a parameter to circleOperation(), we need to prefix it with :: and omit the () brackets.
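
For example:

    fun calArea(radius: Double): Double = Math.PI * radius * radius

    fun main() {
        println(circleOperation(3.0, ::calArea))  // 28.274333882308138
    }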

Using higher-order functions wisely can make our code easier to read and more understandable. 

Lambdas and Higher-Order Functions

We can also pass a lambda (or function literal) to a higher-order function directly when invoking the function: 
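
For instance:

    circleOperation(3.0, { it * 2 })
    // or, since the lambda is the last argument, move it outside the parentheses:
    circleOperation(3.0) { it * 2 }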

Remember, for us to avoid naming the argument explicitly, we can use the it argument name auto-generated for us only if the lambda has one argument. (If you want a refresher on lambda in Kotlin, visit the More Fun With Functions tutorial in this series).

Returning a Function

Remember that in addition to accepting a function as a parameter, higher-order functions can also return a function to callers. 
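
A sketch of such a function:

    fun multiplier(factor: Double): (Double) -> Double = { number -> number * factor }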

Here the multiplier() function will return a function that applies the given factor to any number passed into it. This returned function is a lambda (or function literal) from double to double (meaning the input param of the returned function is a double type, and the output result is also a double type).  

To test this out, we passed in a factor of two and assigned the returned function to the variable doubler. We can invoke this like a normal function, and whatever value we pass into it will be doubled.
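
    val doubler = multiplier(2.0)
    println(doubler(5.0))  // 10.0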

3. Closures

A closure is a function that has access to variables and parameters which are defined in an outer scope. 
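
For example (the names list is illustrative):

    fun printFilteredNamesByLength(length: Int) {
        val names = listOf("Ada", "Grace", "Anita", "Alan")
        // the lambda closes over `length`, which is defined outside its own scope
        println(names.filter { it.length == length })
    }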

In the code above, the lambda passed to the filter() collection function uses the parameter length of the outer function printFilteredNamesByLength(). Note that this parameter is defined outside the scope of the lambda, but that the lambda is still able to access the length. This mechanism is an example of closure in functional programming.

4. Inline Functions

In More Fun With Functions, I mentioned that the Kotlin compiler creates an anonymous class in earlier versions of Java behind the scenes when creating lambda expressions. 

Unfortunately, this mechanism introduces overhead, because an anonymous class is created under the hood every time we create a lambda. Also, a lambda that captures an outer function's parameter or a local variable in a closure adds its own memory overhead, because a new object is allocated on the heap with every invocation.

Comparing Inline Functions With Normal Functions

To prevent these overheads, the Kotlin team provided us with the inline modifier for functions. A higher-order function with the inline modifier will be inlined during code compilation. In other words, the compiler will copy the lambda (or function literal) and also the higher-order function body and paste them at the call site. 

Let's look at a practical example. 
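
A sketch of the non-inline version:

    fun circleOperation(radius: Double, op: (Double) -> Double) = op(radius)

    fun main() {
        circleOperation(3.0) { Math.PI * it * it }
    }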

In the code above, we have a higher-order function circleOperation() that doesn't have the inline modifier. Now let's see the Kotlin bytecode generated when we compile and decompile the code, and then compare it with one that has the inline modifier. 

In the generated Java bytecode above, you can see that the compiler called the function circleOperation() inside the main() method.

Let's now specify the higher-order function as inline instead, and also see the bytecode generated.
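
The only change is the modifier:

    inline fun circleOperation(radius: Double, op: (Double) -> Double) = op(radius)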

To make a higher-order function inline, we have to insert the inline modifier before the fun keyword, just like we did in the code above. Let's also check the bytecode generated for this inline function. 

Looking at the generated bytecode for the inline function inside the main() function, you can observe that instead of calling the circleOperation() function, it has now copied the circleOperation() function body including the lambda body and pasted it at its call-site.

With this mechanism, our code has been significantly optimized—no more creation of anonymous classes or extra memory allocations. But be aware that the generated bytecode will be larger than before. For this reason, it is highly recommended to only inline smaller higher-order functions that accept lambdas as parameters.

Many of the standard library higher-order functions in Kotlin have the inline modifier. For example, if you take a peek at the collection operation functions filter() and first(), you'll see that they have the inline modifier and are also small in size. 

Remember not to inline normal functions which don't accept a lambda as a parameter! They will compile, but there would be no significant performance improvement (IntelliJ IDEA would even give a hint about this).  

The noinline Modifier

If your function has more than one lambda parameter, you can decide which lambdas not to inline by marking them with the noinline modifier. This is useful especially for a lambda parameter that will take in a lot of code. In other words, the Kotlin compiler won't copy and paste that lambda where it is called, but instead will create an anonymous class behind the scenes.
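
A sketch (the function and parameter names are illustrative):

    inline fun process(inlined: () -> Unit, noinline notInlined: () -> Unit) {
        inlined()      // copied to the call site
        notInlined()   // compiled to an anonymous class as usual
    }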

Here, we inserted the noinline modifier to the second lambda parameter. Note that this modifier is only valid if the function has the inline modifier.

Stack Trace in Inline Functions

Note that when an exception is thrown inside an inline function, the method call stack in the stack trace is different from a normal function without the inline modifier. This is because of the copy and paste mechanism employed by the compiler for inline functions. The cool thing is that IntelliJ IDEA helps us to easily navigate the method-call stack in the stack trace for an inline function. Let's see an example.
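
A sketch of such a function:

    inline fun myFunc(op: () -> Unit) = op()

    fun main() {
        myFunc { throw Exception("exception thrown deliberately") }
    }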

In the code above, an exception is thrown deliberately inside the inline function myFunc(). Let's now see the stack trace inside IntelliJ IDEA when the code is run. Looking at the screenshot below, you can see that we are given two navigation options to choose from: the inline function body or the inline function call site. Choosing the former will take us to the point where the exception was thrown in the function body, while the latter will take us to the point where the method was called.

IntelliJ IDEA stack trace for inline function

If the function was not an inline one, our stack trace would be like the one you might be already familiar with:

IntelliJ IDEA stack trace for normal function

Conclusion

In this tutorial, you learned even more things you can do with functions in Kotlin. We covered:

  • extension functions
  • higher-order functions
  • closures
  • inline functions

In the next tutorial in the Kotlin From Scratch series, we'll dig into object-oriented programming and start learning how classes work in Kotlin. See you soon!

To learn more about the Kotlin language, I recommend visiting the Kotlin documentation. Or check out some of our other Android app development posts here on Envato Tuts+!

  • Android SDK
    Java vs. Kotlin: Should You Be Using Kotlin for Android Development?
    Jessica Thornsby
  • Android SDK
    Introduction to Android Architecture Components
    Tin Megali
  • Android SDK
    How to Use the Google Cloud Vision API in Android Apps
    Ashraff Hathibelagal
  • Android SDK
    What Are Android Instant Apps?
    Jessica Thornsby

September 20 2017

12:00

Create Interactive Charts Using Plotly.js, Part 3: Bar Charts

In our last tutorial, you learned how to create line charts in Plotly.js. Every aspect of line charts, like the data to be plotted and the shape or color of the line connecting the plotted points, can be controlled using a set of attributes. Plotly.js allows you to create bar charts in a similar manner. 

In this tutorial, you will learn how to create different kinds of bar charts using Plotly.js. I will also discuss how to control the appearance of these charts, like the bar color and width, using specific attributes.

Before going further, I would like to mention that you can also create some basic bar charts using Chart.js. If you don't plan on using any of the fancy features of Plotly.js, using a lightweight library makes more sense.

Creating Your First Bar Chart

You can plot a bar chart in Plotly.js by setting the value of the type attribute to bar. The rest of the tasks, like creating a trace object or providing the data to be plotted, are similar to the process of creating line charts. Here is the code you need to create a basic bar chart in Plotly.
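
A minimal sketch (the 'graph' target div is a placeholder; densities are approximate values from NASA's fact sheet):

    var trace = {
      x: ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune'],
      y: [5427, 5243, 5514, 3933, 1326, 687, 1271, 1638],  // density in kg/m^3
      type: 'bar'
    };

    Plotly.newPlot('graph', [trace], { title: 'Planet Density' });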

Here is the chart created by the above code. The planet density data has been taken from the Planetary Fact Sheet provided by NASA. 

Creating Stacked, Grouped, and Relative Bar Charts

If you need to represent more complex data in the form of a chart, you can use the barmode attribute available in Plotly to do so. This attribute can be set to stack, group, overlay, and relative. It's usually helpful if you want to plot multiple bar traces with related information.

The stack value creates stacked bar charts in which bars representing the subgroups of the entity are placed on top of each other to form a single column. All the bars within a subgroup have different colors that show the individual contribution of that section to the whole entity. The combined length of the stacked bars represents the total size of that entity.

Here is some code that stacks the GDP contribution of the top eight countries together. The Nominal GDP sector composition data has been taken from Wikipedia.
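
A sketch of the structure (the sector values below are illustrative, not the actual figures):

    var countries = ['USA', 'China', 'Japan', 'Germany', 'UK', 'India', 'France', 'Brazil'];

    var agriculture = {
      x: countries,
      y: [0.2, 1.0, 0.2, 0.1, 0.1, 0.4, 0.1, 0.1],  // trillions USD, illustrative
      name: 'Agriculture',
      type: 'bar'
    };

    var industry = {
      x: countries,
      y: [3.6, 4.5, 1.4, 1.0, 0.5, 0.5, 0.5, 0.4],
      name: 'Industry',
      type: 'bar'
    };

    var services = {
      x: countries,
      y: [14.8, 5.7, 3.3, 2.3, 2.0, 1.4, 1.9, 1.3],
      name: 'Services',
      type: 'bar'
    };

    Plotly.newPlot('graph', [agriculture, industry, services], { barmode: 'stack' });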

The stacking order for the subgroups depends on the order in which the data was passed to the function.

Setting the barmode to overlay will place the individual bars in subgroups over one another. While this mode can be useful for direct comparisons, you should be careful with it because it will hide shorter bars that were passed earlier in the plot() function. 

The following chart has been created with barmode set to overlay. You can now compare the GDP contribution of a single sector across all countries with ease. It is worth noting that all the bars of a single country now start from the bottom. This makes a few observations very easy to spot. For example, it is now very clear that GDP of service sector > GDP of industrial sector > GDP of agricultural sector in all of the top eight countries.

At the current scale, it is impossible to properly see the contribution of the agricultural sector in the GDP of Germany, United Kingdom, and France. The ability of Plotly.js to zoom in can prove very useful in such cases. You can zoom in as much as you want, and the chart will still be very sharp.

Instead of stacking all the bars of a subgroup on top of each other, you can also place them together to form a group. You will have to set the barmode to group to create grouped charts. Keep in mind that providing too much information in grouped charts can make them hard to read. 

You can control the space between bars from different categories, as well as bars within a subgroup, using the bargap and bargroupgap attributes respectively. Up to this point, the bars were all drawn to plot the actual GDP numbers. However, you can use the barnorm parameter to draw the bars in terms of percentages or fractions. When barnorm is set to percent or fraction, the total contribution of the main category is calculated, and then a percentage is allotted to individual bars based on the subgroup value.

For example, the service sector contributes about 79.7% to the total GDP of the United States. This means that the bar for the service sector in the case of the United States will be drawn up to the 79.7 mark. All the attributes discussed in this paragraph are specified in the layout object. Here is an example of a layout object that specifies all these parameters.
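
A sketch:

    var layout = {
      barmode: 'group',
      bargap: 0.25,       // space between bars of different categories
      bargroupgap: 0.1,   // space between bars within the same group
      barnorm: 'percent'  // draw bars as percentages instead of raw values
    };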

The final barmode value called relative is useful when you are plotting a group of traces with both negative and positive values. In this case, the positive values are drawn above the axis and negative values are drawn below the axis.

Changing the Bar Color, Width, Text, and Other Attributes

It is possible to change the color of bars in a trace using the color attribute nested inside the marker parameter. The color and width of the bar outline can be changed in a similar manner. These options are nested inside the line attribute, which itself is nested inside marker. Each of these options can be supplied either as a single value to specify the color and width of all the bars in a trace at once or as an array to specify a different color or width for individual bars.

Another important attribute that you might find useful is base. It allows you to specify the position where the base of a bar is drawn. It can be helpful in certain situations where you are plotting relative values that need to be offset to show the right result.

Additional information can be provided about a specific bar or plotted point using the text attribute. The position of the text can be specified using inside, outside, auto, and none. When inside is used, the text will be positioned inside the bar near its end. The text itself can be scaled and rotated so that it fits properly inside the bar. When outside is used, the text will be placed outside the bar near its end. In this case, the text will only be scaled. When no value has been set for the hovertext attribute, the value of text is shown inside the hover labels as well. 

You can also use different font families for text lying inside as well as outside the bar using the insidetextfont and outsidetextfont attributes.

Here is an example of using all these attributes to create a chart that plots the relative speed of different vehicles, with a single moving vehicle as a reference.
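
A sketch of the idea (all speeds are hypothetical; the base offset shifts each bar by our own speed):

    var ownSpeed = 40;  // km/h, our reference vehicle

    var trace = {
      x: ['Car A', 'Car B', 'Truck', 'Bike'],
      // apparent (relative) speeds as seen from our vehicle
      y: [20, 100, 10, 70],
      // add our speed for vehicles going the same way, subtract it for oncoming ones
      base: [ownSpeed, -ownSpeed, ownSpeed, -ownSpeed],
      type: 'bar',
      marker: {
        color: ['blue', 'orange', 'blue', 'orange'],
        line: { color: 'black', width: 1.5 }
      },
      text: ['same direction', 'oncoming', 'same direction', 'oncoming'],
      textposition: 'auto'
    };

    Plotly.newPlot('graph', [trace]);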

As you can see, I have passed the base and color values as an array. When vehicles are going in the same direction, they appear to be moving slower, so we need to add the speed of our own vehicle as a base value to get the actual speed. 

When vehicles are going in the opposite direction, they appear to be moving fast, so we need to subtract the speed of our own vehicle to get an accurate result. This allows us to show additional information about the speed of our own vehicle on the same plot. The following CodePen example shows the final chart plotted by the above code.

Conclusion

This tutorial showed you how to create a variety of bar charts in Plotly.js using the barmode attribute. You also learned how to customize the appearance of individual bars in a chart using a specific set of attributes. I have tried to cover all the commonly used attributes of bar charts in the tutorial. 

However, there are still a few additional attributes that you can read about in the bar trace reference section on Plotly. Some attributes that I have explained in the tutorial are missing from the reference section, but they can be found in the source files of the library if you are interested in learning more about them.

If you have any questions or suggestions related to this tutorial, feel free to let me know in the comments.

09:00

What's New With Git Support in Xcode 9


Apple’s latest IDE version, Xcode 9, is being released along with iOS 11 this September. And it's a huge update! Fans of Git for source control management (SCM) are big winners in this upcoming release.

Previous versions of Xcode had support for SCM, but it was never considered a first-class citizen. Xcode's Git support was buggy and lackluster, paling in comparison to specialist tools such as SourceTree, which meant that developers had to jump between tools to be able to do advanced Git repository management tasks. 

All this has changed in Xcode 9. Apple no longer sees SCM as secondary, but instead has opted for tight integration with GitHub, and has collaborated with GitHub closely for Xcode 9. 

The net result is that you can now log in to your GitHub account right from Xcode, in order to browse, navigate and clone your GitHub repos. You can also search for other repositories without having to switch to a browser window. Conversely, if you have a GitHub repo open in your browser, you can click the clone button and have Xcode automatically clone and open up the project.

As Apple puts it, “Support for Git is now front and center”, and Apple has introduced a new source control navigator, so you don't have to open up a new window that is visually detached from the rest of Xcode. Developers can now view branches, tags, and remotes through a clear and intuitive timeline, they can inspect commits more granularly, and they can see a more elegant and attractive comparison of changes. 

Objectives of This Tutorial

In this tutorial, I'll walk you through the list of new SCM features that were introduced with Xcode 9. We'll look at three major areas:

  • commits
  • branches
  • tags

Assumed Knowledge

To follow along with this tutorial, you should have some previous experience with Xcode and some working knowledge of source control management through Git. 

Get Started With Xcode 9 SCM

Setting Up Your Environment

In order to make use of Xcode's new SCM features, you will need to:

  • download and install the latest Xcode 9 build
  • have a current GitHub account

Once you have downloaded and installed the latest Xcode build, and have a current GitHub account, you are ready to connect your GitHub account to Xcode 9. Launch Xcode and go to Preferences > Accounts. Then, select the + button at the bottom-left to add a new account. 

Connecting your GitHub account from within Xcode

Select GitHub from the modal window, and enter your GitHub credentials. If you are using two-factor authentication, Xcode will prompt you to enter your code. 

Two-Factor Authentication is supported in Xcode 9

Browsing and Searching GitHub Repositories

Let's see how seamlessly you can browse existing public GitHub repositories, right from within Xcode. Open up the Xcode welcome screen, if it isn't already showing, by going to Window > Welcome to Xcode.

Welcome to Xcode 9

Then select Clone an existing project, which will bring up the list of repositories on GitHub you currently own, work on, or have starred. 

Searching for a GitHub project in Xcode 9

Besides the list that it has presented, you can also search for a public repo (or private repo you have access to) by name or by entering its GitHub URL. For our tutorial, type RealmDo and click Clone, and then select a local directory to clone the project into. 

Checking out a project in Xcode 9

It will then ask you for a directory to check the project out into:

Opening an Xcode project directly from GitHub

Rather than pulling the project repo from Xcode, another really nifty feature is that you can open a GitHub repo in your browser and select Clone or download, where you will see a new button labeled Open in Xcode.

SCM settings in Xcode 9

That’s right: if you click on it, it will automatically open up the project in Xcode, to be cloned and downloaded. 

Back to our project. Since we are using CocoaPods to manage dependencies, we will need to jump back into the command terminal and run pod install:
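
Something like this, assuming the repo was cloned into a RealmDo directory:

    $ cd RealmDo
    $ pod install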

Our project should now be set up locally, and your .xcworkspace should be open in Xcode with all the files in your project. 

Xcode SCM Preferences

By navigating to Xcode's preferences and then selecting Source Control, you can change various aspects of its behavior right from the preferences pane, including the Git author's name and email, and interestingly, even update your .gitignore settings as well. 

Associating a New Project With GitHub

On the other hand, if you've created a new Xcode project and haven't created the project on GitHub yet, you can create a new GitHub project directly from Xcode. Right-click on Remotes and then select Create "RealmDo" Remote on GitHub.

Setting a remote master in Xcode 9

Xcode will then ask you to give the repository a name and select an account for it to be associated with.

Creating a GitHub Repository from within Xcode 9

Exploring the New SCM Interface

Let's explore each of the sections of the new interface. 

Xcode 9s overview of SCM

  1. The left-hand pane has your familiar list of icons to access properties of the project. The second tab is brand new: it is the Source Control Navigator.
  2. The Commit History pane is the most prominent part of the navigator, showing you a historical list of commits for the project and branch.
  3. The right-hand pane presents you with extra contextual information about the branch and selected commits.
  4. Alongside the contextual branch and commit information, this sub-section on the right-hand side shows you which files have been part of the selected commit.
  5. The left-hand pane gives you a hierarchical view of your project from an SCM perspective, showing you the current branch, any other branches whether local or remote, and tags. It also signals whether there are new commits to be pushed or pulled, as you'll see later on.

The Source Control Navigator

One of the first things we will explore in Xcode 9 is the Source Control Navigator, brand new to Xcode 9. No longer is the source view a second-class citizen—it is now fully baked into Xcode. Select the second icon on the left-pane as indicated by (1) in the screenshot above to open up the Source Control Navigator. 

Commit History

Let’s take a look at the Commit History perspective, which shows the list of commits for the project, including commit author name, date, description as well as the author's avatar from GitHub, making up the project history. 

This is a very clear and beautifully organized perspective. While this example project is rather sparse as far as history goes (though we will change that shortly), this pane is deceptively powerful. 

Beyond being able to easily scroll and get useful historical information, you are also able to intuitively filter and search by revision, message or author. In fact, you can chain or join multiple filters, to search not only by author but also by revision. You can also search using the Scope bar, by specific date periods, such as the Last 24 Hours, Last 7 Days, or Last 30 Days.

Filtering commits in Xcode 9

Selecting a specific commit brings up some extra information on the right-hand pane. Alongside the branch information, you get commit information, the full SHA, author, message, and date. You also see the list of files changed in that commit, which is extremely useful in helping you pinpoint the commit which caused a problem during debugging.

Right-hand pane in Xcode 9

Double-clicking on a commit brings up a list of the files that have changed in that commit on the left-hand pane, and a version comparison will be displayed front-and-center. This is the familiar diff perspective, but much improved this time around. 

Comparing versions of a file for a commit

Selecting the Assistant Editor on the upper right-hand side gives you access to a really great diff view, which allows you to not only view a diff of the file, but also traverse through previous commits easily, and see the commit information as well as the actual file changes.

Xcode 9s new comparison perspective panes

By right-clicking on a commit, you can also see a menu that allows you to contact the author, copy the commit information, branch from that commit, or even open the commit on GitHub in the browser. 

Contextual menu for commits

You can also create a tag, which is what we are going to do. Select the Tag button on the first commit, and name it release/0.1.

Creating a tag

Naming your tags in this format, by setting a parent group (release, beta, etc.), serves an interesting purpose, which we will find out shortly. For now, since we’ve tagged a commit, we'll segue into our next topic: branching. 

Branching

We are currently developing on the master branch, but during a normal development workflow we would usually work on a separate branch. We'd make a branch to develop and test a feature, and then merge it back to the master branch when it is working correctly. Let's create a new branch by right-clicking master and selecting Branch from master. Call the new branch readme_changes.

Looking at branches in Xcode 9

I want to demonstrate how merging works, so in the new branch, let’s commit our changes by selecting Source Control > Commit. This brings us back to the comparison view, giving us the opportunity to confirm all the changes we've made in this commit, with the ability to compare our changes line by line.

When you've checked over the changes, enter a commit message similar to the following. And be sure to check the Push to remote box.

Comparison of changes for an active commit

When our feature branch is complete, we want to merge it back to the master branch. Right-click on the master branch and select the second merge option: Merge from "readme_changes" into "master".

Merging commits into the master

Xcode will then merge and automatically switch you back to the master branch. Switching back to the master branch, you'll notice a small up arrow with the numeral 1 to the right of the current branch. 

Commit count from remote master

This lets us know that we have a commit that needs to be pushed to the remote repository. If the count was 2, we would be two commits ahead of the remote master. 

Conversely, if there are new updates in the master which you haven't pulled yet, Xcode will let you know. Xcode will fetch from the remote and notify you with a down arrow and a count showing how far you are behind the remote master.

Tagging

Earlier on, we created a tag on one of the earlier commits. Observing the primary pane, you can see another commit which has been tagged. The left-hand pane makes it easy to view a concise list of all the tags, below branches. 

Remember the naming convention we used for our tag, release/0.1? We did that for a reason: creating tags with a hierarchical parent allows Xcode to group tags together. 

Grouping tags together in Xcode 9

In our case, we have two tags that belong to the release parent tag. We can create multiple logical groups for our tags for different reasons, and Xcode makes it easy to organize our tags. 

By selecting a specific tag, our commit history will only show commits that are related to that tag, meaning we have a complete coding manifest of all changes introduced during a release. This will help us narrow down any problematic code specific to that release.

Conclusion

Xcode 9 is a huge release, and one of the biggest new features is that Apple has finally integrated SCM as a first-class citizen of the IDE. This resolves a long-standing gripe many developers have voiced about how Xcode managed collaborative development and versioning. 

In Xcode 9, not only is GitHub seamlessly blended into the platform, but developers are now rewarded with a beautiful code management interface in a new SCM tab. Developers get a definitive view of the code history, commits, merges, tagging, and branching, as well as advanced perspectives for debugging and identifying commits that have introduced bugs. Git lovers and SCM aficionados will truly appreciate Apple's initiative to consolidate a truly powerful Git management tool, all within the IDE.

While you're here, check out some of our other posts on Xcode, Swift, and iOS app development!

  • Swift
    What's New in Swift 4
    Patrick Balestra
  • iOS SDK
    Faster Logins With Password AutoFill in iOS 11
    Patrick Balestra
  • iOS SDK
    Realm Mobile Database for iOS
    Doron Katz
  • Mobile Development
    How to Submit an iOS App to the App Store
    Vardhan Agrawal
  • iOS SDK
    3 Terrible Mistakes of iOS Developers
    Vardhan Agrawal

September 19 2017

16:00

Creating Digital Signatures With Swift

The main purpose of a digital signature is to verify the integrity of some information. For a simple example, let's say you had a file that was transferred over the network and you want to check that the entire file was transferred correctly. In that case, you would use a checksum.

“A checksum is a small-sized datum derived from a block of digital data for the purpose of detecting errors which may have been introduced during its transmission or storage” — Wikipedia

How do we derive that checksum? The best option is to use a hash. A hash function will take a variable amount of data and will output a signature of fixed length. For example, we could publish a file along with its hash online. When someone downloads the file, they can then run the same hash function on their version of the file and compare the result. If the hashes are the same then the copied or downloaded file is the same as the original. 

A hash is also a one-way function. Given the resulting output, there is no computationally feasible way to reverse that hash to reveal what the original input was. SHA, Secure Hash Algorithm, is a well-known standard that refers to a group of hash functions that have this property and certain others, which make them useful for digital signatures.

About SHA

SHA has undergone many iterations since it was first published. The first and second iterations, SHA-0 and SHA-1, are now known to have major weaknesses, and they are no longer approved for use in applications that rely on security. However, the SHA-2 family includes versions called SHA-256 and SHA-512, and these are considered secure. "256" and "512" simply refer to the number of bits in the resulting hash. For this tutorial, we are going to use SHA-512.

Note: Another older popular hash algorithm was MD5. It was also found to have significant flaws.

Using SHA is great for checking if data was accidentally corrupted, but this doesn't prevent a malicious user from tampering with the data. Given that a hash output is of a fixed size, all an attacker needs to do is figure out which algorithm was used given the output size, alter the data, and recompute the hash. What we need is some secret information added to the mix when hashing the data so that the attacker cannot recompute the hash without knowledge of the secret. This is called a hash-based message authentication code (HMAC).

HMAC

HMAC can authenticate a piece of information or message to make sure that it originated from the correct sender and that the information has not been altered. A common scenario is when you are talking to a server with a back-end API for your app. It may be important to authenticate to ensure that only your app is allowed to talk to the API. The API would have access control to a specific resource, such as a /register_user endpoint. The client would need to sign its request to the /register_user endpoint in order to successfully use it.

When signing a request, it is common practice to take selected parts of the request, such as POST parameters and the URL, and join them together into a string. Taking agreed-upon elements and putting them in a particular order is called canonicalization. In HMAC, the joined string is hashed along with the secret key to produce the signature. Instead of calling it a hash, we use the term signature in the same way that a person's signature in real life is used to verify identity or integrity. The signature is added back to the client's request as a request header (usually also named “Signature”). A signature is sometimes called a message digest; in this context, the two terms can be used interchangeably.

Over on the API side, the server repeats the process of joining the strings and creating a signature. If the signatures match, it proves that the app must have possession of the secret. This proves the identity of the app. Since specific parameters of the request were also part of the string to be signed, it also guarantees the integrity of the request. It prevents an attacker from performing a man-in-the-middle attack, for example, and altering the request parameters to their liking.
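
A minimal sketch of computing such a signature in Swift might look like the code below. This assumes CommonCrypto has been exposed to Swift (for example, through a bridging header), and the hmacSHA512 helper name is just an illustrative choice.

    import Foundation
    import Security

    // Compute an HMAC-SHA512 signature for a message with a secret key.
    func hmacSHA512(message: String, key: Data) -> Data {
        let messageData = message.data(using: .utf8)!
        var mac = Data(count: Int(CC_SHA512_DIGEST_LENGTH))
        mac.withUnsafeMutableBytes { (macBytes: UnsafeMutablePointer<UInt8>) in
            key.withUnsafeBytes { (keyBytes: UnsafePointer<UInt8>) in
                messageData.withUnsafeBytes { (msgBytes: UnsafePointer<UInt8>) in
                    // CCHmac(algorithm, key, keyLength, data, dataLength, macOut)
                    CCHmac(CCHmacAlgorithm(kCCHmacAlgSHA512),
                           keyBytes, key.count,
                           msgBytes, messageData.count,
                           macBytes)
                }
            }
        }
        return mac
    }

    // A 32-byte secret key from a cryptographically secure random source.
    var key = Data(count: 32)
    _ = key.withUnsafeMutableBytes { SecRandomCopyBytes(kSecRandomDefault, 32, $0) }
    let signature = hmacSHA512(message: "my message", key: key)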

In this code, the CCHmac function takes a parameter for the type of hash function to be used, along with two byte-strings and their lengths—the message and a secret key. For the best security, use at least a 256-bit (32 byte) key generated from a cryptographically secure random number generator. To verify everything is working correctly on the other side, run the example and then input the secret key and message on this remote server and verify that the output is the same.

You can also add a timestamp header to the request and signing string to make the request more unique. This can help the API weed out replay attacks. For example, the API could drop the request if the timestamp is 10 minutes stale.

While it's good to stick to using SHA versions that are secure, it turns out that many of the vulnerabilities of the insecure SHA versions do not apply to HMAC. For this reason, you may see SHA1 being used in production code. However, from a public relations standpoint, it may look bad if you have to explain why, cryptographically speaking, it is okay to use SHA1 in this context. Many of the weaknesses of SHA1 are due to what are called collision attacks. Code auditors or security researchers may expect your code to be collision resistant, regardless of the context. Also, if you write modular code where you can swap out the signing function for a different one in the future, you might forget to update the insecure hash functions. Therefore, we will still stick to SHA-512 as our algorithm of choice.

The HMAC CPU operations are fast, but one disadvantage is the problem of key exchange. How do we let each other know what the secret key is without it being intercepted? For example, maybe your API will need to dynamically add or remove multiple apps or platforms from a whitelist. In this scenario, apps would be required to register, and the secret would need to be passed to the app upon successful registration. You could send the key over HTTPS and use SSL pinning, but even then there is always a worry that somehow the key is stolen during the exchange. The solution to the problem of key exchange is to generate a key that doesn't ever need to leave the device in the first place. This can be accomplished using Public Key Cryptography, and a very popular and accepted standard is RSA.

RSA

RSA stands for Rivest-Shamir-Adleman (the authors of the cryptosystem). It involves taking advantage of the difficulty of factoring the product of two very large prime numbers. RSA can be used for encryption or authentication, although for this example we are going to be using it just for authentication. RSA generates two keys, a public and a private, which we can accomplish using the SecKeyGeneratePair function. When used for authentication, the private key is used to create the signature, while the public key verifies the signature. Given a public key, it is computationally infeasible to derive the private key.

The next example demonstrates what Apple and all the popular gaming console companies use when distributing their software. Let's say your company creates and delivers a file periodically that users will drag into the file sharing portion of your app in iTunes. You want to make sure the files you send out are never tampered with before being parsed in the app. Your company will hold onto and guard the private key which it uses to sign the files. In the bundle of the app is a copy of the public key used to verify the file. Given that the private key is never transmitted or included in the app, there is no way for a malicious user to be able to sign their own versions of the files (apart from breaking into the company and stealing the private key).

We will use SecKeyRawSign to sign the file. It would be slow to sign the entire contents of the file using RSA, so the hash of the file is signed instead. Additionally, the data passed to RSA should also be hashed before signing because of some security weaknesses.
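
A hedged sketch of the signing side follows; the generateRSAKeyPair and sign names are illustrative, and CommonCrypto is again assumed to be bridged into Swift for the CC_SHA512 call.

    // Generate a 4096-bit RSA keypair.
    func generateRSAKeyPair() -> (publicKey: SecKey, privateKey: SecKey)? {
        let parameters: [String: Any] = [
            kSecAttrKeyType as String: kSecAttrKeyTypeRSA,
            kSecAttrKeySizeInBits as String: 4096
        ]
        var publicKey: SecKey?
        var privateKey: SecKey?
        let status = SecKeyGeneratePair(parameters as CFDictionary, &publicKey, &privateKey)
        guard status == errSecSuccess, let pub = publicKey, let priv = privateKey else {
            return nil
        }
        return (pub, priv)
    }

    // Sign the SHA-512 digest of the file data with the private key.
    func sign(fileData: Data, privateKey: SecKey) -> Data? {
        var digest = [UInt8](repeating: 0, count: Int(CC_SHA512_DIGEST_LENGTH))
        fileData.withUnsafeBytes { (bytes: UnsafePointer<UInt8>) in
            _ = CC_SHA512(bytes, CC_LONG(fileData.count), &digest)
        }
        var signature = [UInt8](repeating: 0, count: SecKeyGetBlockSize(privateKey))
        var signatureLength = signature.count
        let status = SecKeyRawSign(privateKey, .PKCS1SHA512,
                                   digest, digest.count,
                                   &signature, &signatureLength)
        return status == errSecSuccess ? Data(bytes: signature, count: signatureLength) : nil
    }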

In this code, we used the CC_SHA512 function to specify SHA-512 again. (RSA, unlike HMAC, becomes insecure if the underlying hash function is insecure.) We are also using 4096 as the key size, which is set by the kSecAttrKeySizeInBits parameter. 2048 is the minimum recommended size. This is to prevent a powerful network of computer systems cracking the RSA key (by cracking I mean factoring the RSA key—also known as factorization of a public modulus). The RSA group has estimated that 2048-bit keys could become crackable some time before 2030. If you want your data to be safe beyond that time then it's a good idea to choose a higher key size like 4096.

The generated keys are in the form of SecKey objects. An issue with Apple's implementation of SecKey is that it does not include all of the essential information that makes up a public key, so it's not a valid DER-encoded X.509 certificate. Adding the missing information back so that the key can be consumed by an iOS or OS X app, or even by server-side platforms such as PHP, requires some work in a format known as ASN.1. Fortunately, this was fixed in iOS 10 with new SecKey functions for generating, exporting, and importing keys. 

The code below shows you the other side of the communication—the class that accepts a public key via SecKeyCreateWithData to verify files using the SecKeyRawVerify function.
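
A sketch of that verifying side, reduced to a single function under the same assumptions, might look like this:

    // Import a public key from its external representation and verify a file signature.
    func verify(fileData: Data, signature: Data, publicKeyData: Data) -> Bool {
        let attributes: [String: Any] = [
            kSecAttrKeyType as String: kSecAttrKeyTypeRSA,
            kSecAttrKeyClass as String: kSecAttrKeyClassPublic
        ]
        guard let publicKey = SecKeyCreateWithData(publicKeyData as CFData,
                                                   attributes as CFDictionary, nil) else {
            return false
        }
        var digest = [UInt8](repeating: 0, count: Int(CC_SHA512_DIGEST_LENGTH))
        fileData.withUnsafeBytes { (bytes: UnsafePointer<UInt8>) in
            _ = CC_SHA512(bytes, CC_LONG(fileData.count), &digest)
        }
        let status = signature.withUnsafeBytes { (sigBytes: UnsafePointer<UInt8>) in
            SecKeyRawVerify(publicKey, .PKCS1SHA512,
                            digest, digest.count,
                            sigBytes, signature.count)
        }
        return status == errSecSuccess
    }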

You could try this out and verify that it works using a simple test like the following:
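
For instance, reusing the illustrative helpers sketched above:

    if let keys = generateRSAKeyPair() {
        let fileData = "pretend file contents".data(using: .utf8)!
        if let signature = sign(fileData: fileData, privateKey: keys.privateKey),
           let publicKeyData = SecKeyCopyExternalRepresentation(keys.publicKey, nil) as Data? {
            // Should print true; altering fileData or the signature should print false.
            print(verify(fileData: fileData, signature: signature, publicKeyData: publicKeyData))
        }
    }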

There is one downside to RSA—key generation is slow! The time to generate the keys is dependent on the size of the key. On newer devices a 4096-bit key takes only a few seconds, but if you run this code on an iPod Touch 4th generation, it may take about a minute. This is fine if you are just generating the keys a few times on a computer, but what happens when we need to generate keys frequently on a mobile device? We can't just lower the key size because that downgrades the security. 

So what's the solution? Well, Elliptic Curve Cryptography (ECC) is an up-and-coming approach—a new set of algorithms based on elliptic curves over finite fields. ECC keys are much smaller in size and faster to generate than RSA keys. A key of only 256 bits offers a very strong level of security! To take advantage of ECC, we don't need to change a lot of code. We can sign our data using the same SecKeyRawSign function and then adjust the parameters to use the Elliptic Curve Digital Signature Algorithm (ECDSA).

Tip: For more RSA implementation ideas, you can check out the SwiftyRSA helper library, which is focused on encryption as well as signing messages.

ECDSA

Imagine the following scenario: a chat app lets users send private messages to each other, but you want to make sure that an adversary has not changed the message on its way to the other user. Let's see how you could secure their communication with cryptography. 

First, each user generates a keypair of public and private keys on their mobile device. Their private keys are stored in memory and never leave the device, while the public keys are transmitted to each other. As before, the private key is used for signing the data being sent out, while the public key is used for verifying. If an attacker were to capture a public key during transit, all that could be done is to verify the integrity of the original message from the sender. An attacker can't alter a message because they don't have the private key needed to reconstruct the signature.

There is another pro to using ECDSA on iOS. We can make use of the fact that currently, elliptic curve keys are the only ones that can be stored in the secure enclave of the device. All other keys are stored in the keychain, which encrypts its items in the default storage area of the device. On devices that have one, the secure enclave sits separate from the processor, and key storage is implemented in hardware without direct software access. The secure enclave can store a private key and operate on it to produce output that is sent to your app without ever exposing the actual private key by loading it into memory!

I will add support for creating the ECDSA private key on the secure enclave by adding the kSecAttrTokenIDSecureEnclave option for the kSecAttrTokenID parameter. We can start this example with a User object that will generate a keypair upon initialization.
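
A possible sketch of that User object is below. The application tag is a made-up identifier, and keep in mind that the Secure Enclave only exists on physical devices with the right hardware, so this code won't work in the simulator.

    import Security

    class User {
        var privateKey: SecKey?
        var publicKey: SecKey?

        init() {
            // Secure Enclave keys require an access control object with .privateKeyUsage.
            guard let access = SecAccessControlCreateWithFlags(
                kCFAllocatorDefault,
                kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
                .privateKeyUsage, nil) else { return }

            let privateKeyAttrs: [String: Any] = [
                kSecAttrIsPermanent as String: true,
                kSecAttrAccessControl as String: access,
                // Hypothetical tag; use your own reverse-DNS identifier.
                kSecAttrApplicationTag as String: "com.example.chat.key".data(using: .utf8)!
            ]
            let parameters: [String: Any] = [
                kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
                kSecAttrKeySizeInBits as String: 256,
                kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
                kSecPrivateKeyAttrs as String: privateKeyAttrs
            ]
            var error: Unmanaged<CFError>?
            if let key = SecKeyCreateRandomKey(parameters as CFDictionary, &error) {
                privateKey = key
                publicKey = SecKeyCopyPublicKey(key)
            }
        }
    }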

Next, we will create some helper and example functions. As an example, the class will allow a user to initiate a conversation and send a message. Of course, in your app, you would configure this to include your specific networking setup.
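
As a rough illustration, they might look like the sketch below. The sign(message:privateKey:) and verify(message:signature:publicKey:) functions it calls are sketched in the next step, and the direct method call here is just a stand-in for your real transport.

    extension User {
        // Sign a message and hand it to the recipient (a stand-in for real networking).
        func send(message: String, to recipient: User) {
            guard let key = privateKey,
                  let data = message.data(using: .utf8),
                  let signature = sign(message: data, privateKey: key) else { return }
            recipient.receive(message: data, signature: signature, from: self)
        }

        // Verify an incoming message against the sender's public key.
        func receive(message: Data, signature: Data, from sender: User) {
            guard let senderKey = sender.publicKey else { return }
            if verify(message: message, signature: signature, publicKey: senderKey) {
                print("Verified: \(String(data: message, encoding: .utf8) ?? "")")
            } else {
                print("Message failed verification!")
            }
        }
    }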

Next, we will do the actual signing and verification. ECDSA, unlike RSA, does not need to be hashed prior to signing. However, if you wanted to have a function where the algorithm can be easily swapped without making many changes, then it's perfectly fine to continue to hash the data before signing.
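
One way to sketch this is with the iOS 10 SecKey API; the .ecdsaSignatureMessageX962SHA512 algorithm hashes the message with SHA-512 internally before signing, and it works with Secure Enclave keys.

    func sign(message: Data, privateKey: SecKey) -> Data? {
        var error: Unmanaged<CFError>?
        let signature = SecKeyCreateSignature(privateKey,
                                              .ecdsaSignatureMessageX962SHA512,
                                              message as CFData, &error)
        return signature as Data?
    }

    func verify(message: Data, signature: Data, publicKey: SecKey) -> Bool {
        var error: Unmanaged<CFError>?
        return SecKeyVerifySignature(publicKey,
                                     .ecdsaSignatureMessageX962SHA512,
                                     message as CFData,
                                     signature as CFData, &error)
    }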

This verifies the message, as well as the “identity” of a specific user, since only that user has possession of their private key. 

This doesn't mean that we're connecting the key with who the user is in real life—the problem of matching a public key to a specific user is another domain. While the solutions are out of the scope of this tutorial, popular secure chat apps such as Signal and Telegram allow users to verify a fingerprint or number via a secondary communication channel. Similarly, Pidgin offers a question and answer scheme whereby you ask a question that only the user should know. These solutions open a whole world of debate on what the best approach should be.

However, our cryptographic solution does verify that the message can only have been sent by someone who is in possession of a specific private key.

Let's run a simple test of our example:
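
With the hypothetical User class and helpers above:

    let alice = User()
    let bob = User()

    // Alice signs and "sends" a message; Bob verifies it against her public key.
    alice.send(message: "Hey Bob, it's really me!", to: bob)
    // Prints: Verified: Hey Bob, it's really me!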

OAuth and SSO

Often when working with third-party services, you will notice other high-level terms used for authentication, such as OAuth and SSO. While this tutorial is about creating a signature, I will briefly explain what the other terms mean.

OAuth is a protocol for authentication and authorization. It acts as an intermediary to use someone's account for third-party services and aims to solve the problem of selectively authorizing access to your data. If you log in to service X via Facebook, a screen asks you, for example, if service X is allowed to access your Facebook photos. It accomplishes this by providing a token without revealing the user's password.

Single sign-on, or SSO, describes the flow where an authenticated user can use their same login credentials to access multiple services. An example of this is how your Gmail account works to log in to YouTube. If you had several different services at your company, you may not want to create separate user accounts for all of the different services.

Conclusion

In this tutorial, you saw how to create signatures using the most popular standards. Now that we have covered all the main concepts, let's recap!

  • Use HMAC when you need speed and are sure that the secret key can be exchanged securely.
  • If the keys have to travel across a network, it's better to use RSA or ECDSA.
  • RSA is still the most popular standard. Its verification step is quite fast. Use RSA if the rest of your team is already familiar with or using the standard.
  • If you need to constantly generate keys on a slow device, however, use ECDSA. While the ECDSA verification is a tad slower than RSA verification, that doesn't compare to the many seconds saved over RSA for key generation.

So that's it for digital signatures in Swift. If you have any questions, feel free to drop me a line in the comments section, and in the meantime check out some of our other tutorials on data security and app development in Swift!

  • iOS SDK
    Securing iOS Data at Rest: Protecting the User's Data
    Collin Stuart
  • iOS SDK
    Securing iOS Data at Rest: The Keychain
    Collin Stuart
  • Swift
    What's New in Swift 4
    Patrick Balestra
  • iOS SDK
    Faster Logins With Password AutoFill in iOS 11
    Patrick Balestra

13:09

Quick Tip: How to Use Atom as a Git GUI

When you use Git as part of your workflow, it can be very useful to have easy access to a GUI to help speed up your processes. There are a great many Git GUIs to choose from, but if you use Atom as your code editor you won’t have to go far to find some solid options. 

Thanks to the excellent ecosystem of extension packages created by the Atom community, as well as some built-in tools, there is a range of fantastic ways to get a Git GUI up and running right inside your editor. In this quick tip we’ll look at three of these methods.

Note: If you don't have Atom and you'd like to get it, go to: https://atom.io/ 

1. git-control

The git-control package is my personal favorite as it makes working with Git a breeze. If, like me, you’re a bit more of a casual Git user and don’t have all the commands you need 100% memorized, this is the package for you.

The interface for git-control is contained in its own tab, so you can have it fill up the entire workspace or you can split it off to the side, as you’ll see I have in the image above.

You get a full range of easy access buttons to trigger common tasks like commit, pull, push and so on. It automatically detects new files, changed files and deleted files and shows them to you under Workspace in the sidebar on the left. Switching branches is as easy as clicking the desired branch in the same sidebar.

When you perform actions you’ll get a complete readout of everything happening in the log section running along the bottom of the git-control UI, so if something doesn’t work as expected it’s easy to see the error messages that have been returned.

To install this package go to Packages > Settings View > Install Packages/Themes, search for git-control and hit the corresponding Install button when you find it. To open it up once installed go to Packages > git-control > Toggle.

2. Native Git Integration

If you’d rather not install any packages at all, another option for you is Atom’s own native Git integration, something you might naturally expect to find in a code editor made by GitHub. To open up the Git management panel, move your mouse over to the far right edge of Atom and you’ll see a little semi-circle with an arrow on it appear. Click it and you’ll see the interface pictured in the image above, which opens out as a sidebar on the right.

Through this interface you’ll see staged and unstaged changes, and you’ll be able to make commits with associated messages. There are two tabs in the interface: the left one is shown in the image above, and the right one is for GitHub-specific integration. I’ve been using GitLab of late so I can’t tell you too much about the GitHub section, but if you are a GitHub user I’m sure this native integration will be very handy.

3. git-plus

The git-plus package is the GUI for you if you already know exactly what you want to do and just want a convenient, inline way to do it. To install this package, search for git-plus via Packages > Settings View > Install Packages/Themes. 

Once installed, to use it go to Packages > Git Plus. Here you’ll see a list populated with quick shortcuts for all the commands you might typically run through a terminal.

So as you can gather, this package isn’t necessarily designed to help you find your way around if you’re new to Git; rather, it’s there to save you time and prevent you from needing to switch applications when running commands you already know well.

Wrapping Up

Those are three ways to get yourself rolling with a GUI for using Git, right inside the Atom editor. In my experience this makes working with Git a more pleasant, smooth and efficient experience. I hope you find one of the options here does the same for you!

13:00

12 Useful WordPress Plugins for Page Layouts

When it comes to plugins for laying out the pages on your WordPress site, Visual Composer is a hard one to beat. This easy-to-use drag-and-drop page builder with over 200 unique addons will help the developer and the novice alike create just about any layout imaginable.  

But for users looking to modify just one page in a specific way, this powerful page builder is probably overkill. That’s why we’ve scoured CodeCanyon for the most useful WordPress page layout plugins and have come up with these 12 that will meet a variety of needs.

1. Essential Grid WordPress Plugin

Whether you’re looking for just the right grid to display your blog posts, photos, products, testimonials, social media streams, services or whatever else you have in mind, the Essential Grid WordPress Plugin has you covered.

This multipurpose grid enables you to display any content on your WordPress site in your choice of elegant grid form. First you decide what source you want to use for your grid entry. Then you customise the grid by choosing from three available styles before selecting your required number of columns and rows and setting spacing for the items. From there you can add any number of skins available to customise your look further.

Essential Grid WordPress Plugin

Notable features:

  • over 25 example skins included
  • widely varied content sources possible, including images, YouTube, HTML5 self-hosted video, etc.
  • various animation styles available
  • responsive and mobile optimised
  • and more

Essential Grid WordPress Plugin is an engaging and visually appealing way to show off content on your WordPress page.

2. FlatFolio

FlatFolio is another great option for those looking for an alternative grid layout. Aside from being highly customisable, the plugin has the additional benefit of offering both Carousel and Slider functions as well. You can customise the grid with logos, captions, coloured overlays, titles and subtitles, hover shadows, and more.

FlatFolio

Notable features:

  • preview feature
  • carousel and slider features
  • unlimited item formats
  • various overlay effects
  • and more

For creatives and other users wanting to show off their visual content, FlatFolio is a versatile and easy-to-use layout choice. 

3. Sidebar and Widget Manager

If you’ve ever wished for the freedom to place a widget in the content area of your WordPress site page, the Sidebar and Widget Manager has heard your prayers. The plugin gives you complete control over where you add widgets to your site’s pages by expanding placement possibilities beyond the sidebar and footer areas to the page content area.

Sidebar and Widget Manager

Notable features:

  • drag-and-drop grid manager
  • vertical or horizontal widget alignment
  • ability to display or hide any widget on any page
  • supports any kind of content
  • and more

Being able to add widgets to any area of your WordPress site page with the Sidebar & Widget Manager plugin is a great way to build your own unique page layout.

4. Content Manager for WordPress

The Content Manager for WordPress plugin is probably the most versatile of the plugins in this list because it allows you to create any kind of layout you desire in three simple steps. Simply add a new page, create your desired layout with the drag-and-drop interface, and then add your content. 

Content Manager for WordPress

Notable features:

  • ability to add unlimited fully editable pages
  • 10 layout colours
  • 30 Google Font options
  • multiple language support
  • and more

User rjhuntington says of Content Manager for WordPress: “Love it! Solves so many irksome WP shortcomings.”

5. Stupid Simple Testimonials

The Stupid Simple Testimonials plugin makes it super easy to add testimonials or quotes to your page layout. Using a simple shortcode that can be inserted in pages, posts and widgets, the plugin features eight unique ways to customise your page layout.

Stupid Simple Testimonials

Notable features:

  • grid layout automatically adjusts to match the total number of testimonials
  • six unique colour schemes provided to match your theme
  • ability to divide your testimonials into categories for easy management
  • ability to edit using the standard WordPress editor
  • and more

The Stupid Simple Testimonials plugin is a straightforward way to alter your page layout and will integrate your testimonials or quotes into your site in a crisp, professional fashion.

6. WordPress Meet the Team Shortcode Plugin

Just as the Stupid Simple Testimonials plugin above is dedicated to adding testimonials to your page layout, the WordPress Meet the Team Shortcode Plugin is dedicated to adding your team members to your page layout in an elegant and professional way. 

The plugin allows you to not only show members of your team but also arrange them by categories or groups as needed. What’s more, the plugin allows you to add and manage individual member information like job titles, locations, social media, etc.

WordPress Meet the Team Shortcode Plugin

Notable features:

  • responsive grid or carousel view
  • unlimited custom fields
  • drag-and-drop ordering
  • supports up to 12 columns
  • highly customizable
  • and more

WordPress Meet the Team Shortcode Plugin is a good choice if introducing your team and their specialities to potential visitors to your site is a priority.

7. JC WooCommerce Multistep Checkout

The JC WooCommerce Multistep Checkout plugin is designed to improve your customer’s checkout experience by replacing WooCommerce’s default checkout layout. The aim of the plugin is to display each step of the checkout process clearly, so that at any given time users know exactly where they are and how far along they are in the process.

JC WooCommerce Multistep Checkout

Notable features:

  • easily control which steps are displayed
  • ability to add custom steps like user authentication
  • customisable text, style and layout features
  • and more

JC WooCommerce Multistep Checkout is a good choice for refining your WooCommerce checkout process.

8. UberMenu

What’s a great site without an equally great menu layout to help your visitors navigate your content seamlessly? UberMenu plugin is designed to facilitate just such easy navigation. The highly customisable and responsive plugin offers seven main menus and several submenu options to suit a wide variety of tastes and needs.

UberMenu

Notable features:

  • easy to add images, descriptions, Google maps, etc.
  • choice of 18 layout variables
  • fully responsive and compatible with mobile and touch-enabled devices
  • extensive user-friendly documentation
  • and more

UberMenu works out of the box with the WordPress Menu System, making it simple to get started and create gorgeous menus quickly and easily.

9. WordPress Events Calendar

If you’re looking for a clean and elegant way to keep your customers, clients and/or followers updated on your public appearances then adding the WordPress Events Calendar plugin to your WordPress page layout might be the right solution for you. 

WordPress Events Calendar

Notable features:

  • ability to add multiple calendars to a page or post
  • upcoming events widget
  • multiple language support
  • and more

User Applicist praises WordPress Events Calendar for its “excellent functionality and UI”.

10. WordPress Content Boxes Plugin

With the WordPress Content Boxes Plugin, it’s all about boxes. 43 stylishly designed boxes to be precise, any of which users can select to contain and showcase site content, like testimonials, social icons, team members, products, pricing lists, etc.

WordPress Content Boxes Plugin

Notable features:

  • over 1,000 icons
  • highly customisable
  • ability to use multiple content boxes in one page
  • and more

Though one of the newer additions to CodeCanyon, WordPress Content Boxes Plugin with Layout Builder is sure to be a big favourite in the coming months. 

11. Flip Magazine

Flip Magazine is a beautifully conceived magazine layout which enables users to simulate page turning by pressing forward or backward arrows. It will appeal to creatives who want to show off their image gallery in full-width magazine-page format, but will also work equally well for those who want to display their WordPress posts stylishly.

With Flip Magazine, you can build unlimited pages with each page carrying its own settings.

Flip Magazine

Notable features:

  • user-friendly touch optimised design with gestures support
  • all Google Fonts supported
  • unlimited flip magazine pages
  • CSS3 animations for navigation menu
  • and more

Flip Magazine is possibly the coolest of the 12 plugins featured here, but don’t take my word for it. Check it out for yourself.

12. Smart Footer System

I don’t know about you, but I’m pretty tired of the same old same old when it comes to footers, so I got pretty excited when I found the Smart Footer System, which made me look at footers in a whole new way.  

Smart Footer allows users to customise their footers in any number of ways, including adding photos and videos and various animations. With over 70 different styles to choose from, this highly customisable plugin will breathe new life into the bottom of your WordPress site pages.

Smart Footer System

Notable features:

  • several animation modes and speeds
  • innovative and powerful admin panel 
  • video background possible
  • highly customisable
  • and more

With the Smart Footer System plugin, you no longer have any excuse for having dull, snooze-inducing footers on your site.

Conclusion

These 12 plugins just scratch the surface of page layout plugins available at Envato Market. So if none of them catch your fancy, there are plenty of other great options there to hold your interest.

And if you want to improve your skills building WordPress sites, check out the ever so useful free WordPress tutorials we have on offer.

12:00

Create Interactive Charts Using Plotly.js, Part 2: Line Charts

In the Getting Started With Plotly.js tutorial of this series, you were presented with a quick start guide that briefly covered all the features, bundles and chart types available in the library. If you have not already read that tutorial, I would suggest that you go over it at least once to get a broad idea of the Plotly.js library.

In this tutorial, you will learn how to create line charts in Plotly. I have also written another series in the past about a lightweight library called Chart.js that can be used to create canvas-based charts in JavaScript. In one tutorial that is titled Getting Started With Chart.js: Line and Bar Charts, I covered the process of creating line charts using Chart.js. People who have read that tutorial will probably remember that you have to set the type attribute to line to create a line chart in Chart.js.

With Plotly.js, you cannot set the type attribute to line to create a line chart. You will have to set the type attribute to scatter and then set the mode attribute to "lines", "lines+markers", or "lines+markers+text". Keeping this in mind, let's get started with the tutorial and create some awesome line charts.

Creating a Basic Line Chart

In this section, we will plot two different scatter traces in the form of line charts using Plotly. The code is very similar to the one we used in the last tutorial to create a line chart using a single trace. Here is the actual code:

Styling the Chart Lines Using Different Attributes

At this point, everything in our chart has been created using the default options. The rest of the sections in the tutorial will show you how to use all the attributes available for the scatter type trace to create custom line charts. The options to change the appearance of a plotted line are all available under the line key.

The color of a line can be specified using the color key. You can also control the width of a line by using the width attribute. The width is specified in pixels, and its default value is 2. 

The shape of the line between different points that are being plotted can be specified using the shape attribute. The shape is linear by default, but you can also set it to spline, vh, hv, hvh, or vhv. When shape is set to linear, there are no bends in the line that connects two consecutive points. In the case of vh, hv, hvh, and vhv, the lines are never drawn at an angle. They are either horizontal or vertical, with a 90-degree bend that can occur at the first point, second point, midpoint, or both points, depending on the specified shape value. 

The last option is to set the shape to spline. In this case, the line actually becomes a curve, with no sharp bends. The smoothness of this curve can be set with the help of another attribute called smoothing. This attribute can accept any value between 0 and 1.3 inclusive. Setting it to zero will result in straight lines like the linear value. Here is some code that uses all these attributes to plot five different lines on a chart:

The following line chart shows the difference between different values of the shape attribute. I have used the name parameter to assign different names to each line so that you can clearly see the shape created by each value.

One more parameter that you can use while drawing the plot lines is the dash parameter. You can set this parameter to a string value to set a dash style for your lines. Valid values for this attribute are: solid, dot, dash, longdash, dashdot, and longdashdot.

Another option is to provide the dash length yourself using a list of pixel values like "4px, 4px, 10px". The following demo uses different values of the dash property to create unique line styles.

Setting Fill Colors and Markers

The area under a plotted line in a graph remains transparent by default, but you can fill it with a color of your choice if you wish to. This can be achieved by using the fill parameter. It is set to none by default, but other valid values include tozeroy, tozerox, tonexty, tonextx, toself, and tonext.

The value tozeroy will fill all the area starting from the line trace to y=0. Similarly, the value tozerox will fill all the area starting from the line trace to x=0. The tonexty and tonextx values fill all the area between endpoints of the current trace and the trace before it. When no other traces are left, these values will act like tozeroy and tozerox respectively. You can use these two values to create stacked graphs. 

As the name suggests, the value toself connects the endpoints of the trace to each other, forming a closed shape. The last value, tonext, fills all the available space between two traces only when one trace completely encloses the other one.

By default, the area specified by the value of the fill parameter is filled using a semi-transparent variant of the line color, marker color, or marker line color, depending on availability. However, you can specify your own color to fill that area using the fillcolor parameter.

The dots that represent the plotted points on the graph are marked using circles by default. Plotly.js also provides a lot of other marker symbols for you to choose from. There are about 44 different marker symbols, and almost all of them are available in four different versions. You can find a list of all these symbols in the documentation. The markers also have opacity, size, and color parameters that let you control the opacity, size, and color of these symbols. The default value of opacity is 1, and the default value of size is 6.

Here is a little code that uses all the parameters from this section to create line charts with filled areas and non-circular markers:

When filling the area between different traces with a given color, you need to keep in mind that all these parameters fill the trace area based on the order in which the traces are supplied to the plot() function, and not the order in which the traces were declared. 

In our case, traceE is the first trace, and there is no trace before it. This means that the value tonexty effectively becomes tozeroy this time. We have passed traceD after traceE, so all the points between these two traces will be filled with the color of traceD. If we had instead passed traceA first, then the fill area would have extended all the way from the top to y=0. That might not be a desirable outcome in some cases. So you should always keep the order in which the traces are passed in mind.

Conclusion

In this tutorial, you learned how to create line charts using Plotly.js. Different sections of the tutorial discussed a variety of customization options, like setting the shape, color, and width of the lines. 

You also learned about different marker symbols and the fill option provided by Plotly.js that can be used to further customize a line chart. I have only covered some of the major attributes here, but there are a few others, like showlegend and hoverinfo, that you should probably know about. You can read about all these attributes under the scatter reference section on the Plotly website.

In the next tutorial of the series, you will learn how to create bar charts in Plotly. If you have any questions or tips related to this tutorial, feel free to share them in the comments.

September 18 2017

15:00

Use Machine Learning to Recognize Images With IBM Watson

Wouldn't it be great if an Android app could see and understand its surroundings? Can you imagine how much better its user interface could be if it could look at its users and instantly know their ages, genders, and emotions? Well, such an app might seem futuristic, but it's totally doable today.

With the IBM Watson Visual Recognition service, creating mobile apps that can accurately detect and analyze objects in images is easier than ever. In this tutorial, I'll show you how to use it to create a smart Android app that can guess a person's age and gender and identify prominent objects in a photograph.

Prerequisites

To be able to follow this tutorial, you must have:

1. Activating the Visual Recognition Service

Like all Watson services, the Visual Recognition service must be manually activated before it can be used in an app. So log in to the IBM Bluemix console and navigate to Services > Watson. In the page that opens, press the Create Watson service button.

Watson home screen

From the list of available services shown next, choose Visual Recognition.

Watson services catalog

You can now give a meaningful name to the service and press the Create button.

Visual Recognition service configuration

Once the service is ready, an API key will be generated for it. You can view it by opening the Service credentials tab and pressing the View Credentials button.

Credentials as a JSON document

2. Project Setup

In this tutorial, we'll be using the Watson Java and Android SDKs while interacting with the Visual Recognition service. We'll also be using the Picasso library in order to fetch and display images from the Internet. Therefore, add the following implementation dependencies to your app module's build.gradle file:

To be able to interact with Watson's servers, your app will need the INTERNET permission, so request it in your project's AndroidManifest.xml file.

Additionally, the app we'll be creating today will need access to the device's camera and external storage media, so you must also request the CAMERA and WRITE_EXTERNAL_STORAGE permissions.

Lastly, add your Visual Recognition service's API key to the strings.xml file.

3. Initializing a Visual Recognition Client

The Watson Java SDK exposes all the features the Visual Recognition service offers through the VisualRecognition class. Therefore, you must now initialize an instance of it using its constructor, which expects both a version date and the API key as its arguments.

While using the Visual Recognition service, you'll usually want to take pictures with the device's camera. The Watson Android SDK has a CameraHelper class to help you do so. Although you don't have to, I suggest you also initialize an instance of it inside your activity's onCreate() method.

At this point, you have everything you need to start analyzing images with the service.

4. Detecting Objects

The Visual Recognition service can detect a large variety of physical objects. As input, it expects a reasonably well-lit picture whose resolution is at least 224 x 224 pixels. For now, let's use the device camera to take such a picture.

Step 1: Define a Layout

The user must be able to press a button to take the picture, so your activity's layout XML file must have a Button widget. It must also have a TextView widget to list the objects detected.

Optionally, you can throw in an ImageView widget to display the picture.

In the above code we've added an on-click event handler to the Button widget. You can generate a stub for this handler in your code by clicking on the light bulb shown beside it.

Step 2: Take a Picture

You can take a picture by simply calling the CameraHelper object's dispatchTakePictureIntent() method, so add the following code inside the event handler:

The above method uses the device's default camera app to take the picture. That means to gain access to the picture taken, you must override your activity's onActivityResult() method and look for results whose request code is REQUEST_IMAGE_CAPTURE. Here's how you can do that:

Once you find the right result, you can extract the picture from it in the form of a Bitmap object using the getBitmap() method of the CameraHelper class. You can also get the absolute path of the picture using the getFile() method. We'll need both the bitmap and the absolute path, so add the following code next:

If you chose to add the ImageView widget to your layout, you can display the picture now by directly passing the bitmap to its setImageBitmap() method.

Step 3: Classify the Picture

To detect items in the picture, you must pass the image as an input to the classify() method of the VisualRecognition object. Before you do so, however, you must wrap it in a ClassifyImagesOptions object, which can be created using the ClassifyImagesOptions.Builder class.

The return value of the classify() method is a ServiceCall object, which supports both synchronous and asynchronous network requests. For now, let's call its execute() method to make a synchronous request. Of course, because network operations are not allowed on the UI thread, you must remember to do so from a new thread.

The classify() method is built to handle multiple pictures at once. Consequently, its response is a list of classification details. Because we are currently working with a single picture, we just need the first item of the list. Here's how you can get it:

The Visual Recognition service treats each item it has detected as a separate class of type VisualClassifier.VisualClass. By calling the getClasses() method, you can get a list of all the classes.

Each class has, among other details, a name and a confidence score associated with it. The following code shows you how to loop through the list of classes and display the names of only those whose scores are greater than 70% in the TextView widget.

Note that the above code uses the runOnUiThread() method because the contents of the TextView widget can be updated only from the UI thread.

If you run the app now and take a picture, you will be able to see Watson's image classification working.

Image classification result

5. Analyzing Faces

The Visual Recognition service has a dedicated method to process human faces. It can determine the age and gender of a person in any photograph. If the person's famous, it can also name him or her.

Step 1: Define a Layout

Analyzing faces with the Visual Recognition service is not too different from classifying objects. So you are free to reuse the layout you created earlier. However, to introduce you to a few more features the service offers, I'm going to create a new layout, this one with a slightly different functionality.

This time, instead of taking pictures using the camera and passing them to the service, let's directly pass an image URL to it. To allow the user to type in a URL and start the analysis, our layout will need an EditText widget and a Button widget. It will also need a TextView widget to display the results of the analysis.

I suggest you also add an ImageView widget to the layout so that the user can see the image the URL points to.

Step 2: Display the Image

Inside the on-click handler of the Button widget, you can call the getText() method of the EditText widget to determine the image URL the user typed in. Once you know the URL, you can simply pass it to Picasso's load() and into() methods to download and display the image in the ImageView widget.

Step 3: Run Face Analysis

To run face analysis on the URL, you must use the detectFaces() method of the VisualRecognition client. Just like the classify() method, this method too needs a VisualRecognitionOptions object as its input. 

Because you already know how to use the execute() method to make synchronous requests, let's now call the enqueue() method instead, which runs asynchronously and needs a callback. The following code shows you how:

As you can see in the above code, inside the onResponse() method of the callback object, you have access to a DetectedFaces object, which contains a list of face analysis results. Because we used a single image as our input, we'll be needing only the first item of the list. By calling its getFaces() method, you get a list of all the Face objects detected.

Each Face object has a gender and age range associated with it, which can be accessed by calling the getGender() and getAge() methods.

The getGender() method actually returns a Gender object. You must call its own getGender() method to get the gender as a string, which will either be "MALE" or "FEMALE". Similarly, the getAge() method returns an Age object. By calling its getMin() and getMax() methods, you can determine the approximate age of the face in years.

The following code shows you how to loop through the list of Face objects, generate a string containing the genders and ages of all the faces, and display it in the TextView widget:

Here's a sample face analysis result:

Face analysis result

Conclusion

The Watson Visual Recognition service makes it extremely easy for you to create apps that are smart and aware of their surroundings. In this tutorial, you learned how to use it with the Watson Java and Android SDKs to detect and analyze generic objects and faces.

To learn more about the service, you can refer to the official documentation.

And be sure to check out some of our other posts about machine learning here on Envato Tuts+!

  • Android Things
    Android Things and Machine Learning
    Paul Trebilcox-Ruiz
  • Android Things
    Android Things: Creating a Cloud-Connected Doorman
    Paul Trebilcox-Ruiz
  • Android SDK
    How to Use Google Cloud Machine Learning Services for Android
    Ashraff Hathibelagal
  • Android SDK
    Create an Intelligent App With Google Cloud Speech and Natural Language APIs
    Ashraff Hathibelagal
  • Android SDK
    How to Use the Google Cloud Vision API in Android Apps
    Ashraff Hathibelagal

12:00

Create Interactive Charts Using Plotly.js, Part 1: Getting Started

In the series titled Getting Started With Chart.js, you learned how to use Chart.js to easily create responsive canvas-based charts. The series covered seven basic chart types offered by the library. However, you may be required to create more complex charts with additional functionality to make those charts interactive. 

One of the best free-to-use libraries to create a variety of responsive, interactive and functionality-rich charts is Plotly.js. In this series, you will learn how to create different kinds of charts using Plotly.js, including line charts, bar charts, bubble charts, and dot plot charts.

Why Use Plotly.js?

Plotly.js offers a lot of features that make learning about the library worth the effort. It is a high-level declarative library built on top of d3.js and stack.gl. Here is a list of features that make Plotly one of the best JavaScript charting libraries:

  • You can create interactive charts with ease using Plotly.js. Any chart that you create with the library is equipped with features like zooming in, zooming out, panning, auto-scaling, etc. These features are very useful when you want to study charts with a large number of plotted points. All these events are exposed in the API, so you can write custom code to perform your own actions when any of these events are triggered.
  • High performance when plotting a lot of points makes Plotly.js a great choice whenever you have to chart a large amount of data. Since most charts are created using SVG, you get a decent amount of compatibility across browsers and the ability to export high-quality images of any chart. However, drawing a large number of SVG elements in the DOM can adversely affect the performance. The library uses stack.gl to create high-performance 2D and 3D charts. 
  • Any 3D charts that you create are rendered with the help of WebGL to take full advantage of all the power that the GPU has to offer.
  • All the Plotly.js charts are fully customizable. Everything from the colors and labels to grid lines and legends can be customized using a set of JSON attributes. You will learn how to customize different chart types in the next three parts of the series.

Installing Plotly

Before we start using Plotly.js, we need to install it first. There are a lot of different ways to install the library.

You can directly clone the library using the following command and then use the files located in the dist folder.

Another option is to perform the installation using npm by running the following command:

You can also use the Plotly.js CDN and link to the library directly. Generally, you will want to use the compiled and minified file with the latest version of the library. However, you can also link to a specific version of the library on the CDN. Here is an example:

At the time of writing this tutorial, the latest version of the library is 1.28.3. The file size after minifying and compressing the library is 666 kB. The non-minified and uncompressed version has a size of 5.4 MB. As you can see, the long list of features that this library offers comes at a price.

Starting from version 1.15, you can choose from different partial bundles, each of which allows you to create specific chart types. There are seven different bundles: basic, cartesian, geo, gl3d, gl2d, mapbox, and finance. You can get the CDN link for these bundles using the following line:

If you only need to draw charts from a single bundle, you can use this method to significantly reduce the file size. Here is some additional information about each of them.

  • basic: This bundle contains the scatter, bar and pie trace modules. The compressed and minified version of this bundle has a size of 215.7 kB.
  • cartesian: This bundle contains the scatter, bar, box, heatmap, histogram, histogram2d, histogram2dcontour, pie, contour and scatterternary trace modules. The compressed and minified version of this bundle has a size of 238.2 kB. 
  • geo: This bundle allows you to create different types of map-related charts in JavaScript. The compressed and minified version of this bundle has a size of 224.1 kB.
  • gl3d: This bundle allows you to create different types of 3D charts using the scatter, scatter3d, surface and mesh3d trace modules. The compressed and minified version of this bundle has a size of 354 kB.
  • gl2d: This bundle contains the scatter, scattergl, pointcloud, heatmapgl, contourgl and parcoords trace modules. It has a size of 362.9 kB after minification and compression.
  • mapbox: This bundle contains the scatter and scattermapbox trace modules. The file size in this case is 328.6 kB. 
  • finance: The finance bundle can be used to create time series, candlestick and other chart types to plot financial data. This module consists of scatter, bar, histogram, pie, ohlc and candlestick trace modules.

Using Plotly to Create a Chart

Once you have decided the charts that you want to create and loaded the appropriate bundle in your webpage, you can start creating your own charts using Plotly.js. The first thing that you need to do is create an empty div element where the graph should be drawn.

Have some data ready that you want to plot on the chart. In this example, I am just using some random numbers to create the chart. Finally, you have to call the plot() function and provide it with all the information like the container div, the data, and the layout options. Here is the code to create a very basic line chart:

All charts in Plotly.js are created declaratively using JSON objects. Every property of the chart, like its color and data, has a corresponding JSON attribute that can be used to fully customize the appearance and behavior of the chart.

The attributes can be broadly divided into two categories. The first one is called traces, which are objects that are used to provide information about a single series of the data to be plotted on the graph. The second category is layout, which provides different attributes that control all the other aspects of the chart like its title or annotations. Different traces are further categorized by the chart type, and the attributes that are available to you to draw the chart will depend on the value of the type attribute.

In the above example, we have created a traceA object that stores the trace type and the data that you want to plot on the chart. The following CodePen demo shows the final result of the above code.

As you can see in the demo, you can zoom in, zoom out, or auto-scale the graph. You can also download the chart as an image. The chart itself looks very professional with its sharp lines.

Layout Attributes to Customize the Charts

In the rest of the tutorials in this series, we will focus on learning about different attributes related to specific chart types like line and bar charts. Before doing that, you should also have some basic knowledge of different layout attributes that control aspects common to all chart types like the font, the title, the x-axis, the y-axis, etc.

You can specify a global font which should be used while creating traces and other layout components like the axes and the title. The options are specified using the font object, and these values are used by default by all the components of the chart. The color, size and family keys are nested inside the font key. You can use them to set the global font color, global font size, and global font-family respectively.

Each chart has a title attribute which can be used to set the title for the current chart. It gives the user some information about what you are plotting on the chart. The font properties for the title can be specified using the titlefont attribute. Just like the global font attribute, the color, size and family keys nested inside the titlefont attribute can be used to control the font-related properties of the title.

You can specify the width and height of a chart in pixels using the width and height keys. You can also control the spacing around the chart as well as the plotting area using different attributes nested under the margin key. All the values are specified in pixels. 

The left margin is specified using the l attribute, the right margin using the r attribute, the top margin using the t attribute, and the bottom margin using the b attribute. The plotting area and the axis lines are very close to each other by default. You can add some space around the plotting area using the pad attribute nested inside the margin key. The padding is specified in pixels, and its default value is zero.

You can choose your own colors for the background of the whole chart as well as the plotting area to match the theme of your website. Both these colors are set to white by default, but you can specify a different value for each of them using the paper_bgcolor and plot_bgcolor keys respectively.

You can also specify the title and different font properties for all the axes in your chart. The font properties are nested inside the axis keys for the respective axes. You also have the ability to independently control the base color for the axis and the color of the font used for its title. 

Sometimes, the points being plotted on a chart don't go all the way down to zero. In such cases, the ticks created by Plotly on an axis also don't extend to zero. However, if you want the ticks to always start from zero, regardless of the range of points being plotted, you can use the rangemode attribute and set its value to tozero.

The following code snippet uses some of the attributes we just discussed to modify the appearance of the chart we created in the previous section.
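
A sketch of what that might look like; the attribute names are from the Plotly.js layout API, while the colors, sizes, and titles are illustrative values only:

    var layout = {
      title: 'A Customized Line Chart',
      titlefont: { color: '#333333', size: 24, family: 'Arial' },
      font: { color: '#555555', size: 14, family: 'Arial' },
      width: 600,
      height: 450,
      margin: { l: 60, r: 40, t: 80, b: 60, pad: 4 },
      paper_bgcolor: '#e6f7ff',
      plot_bgcolor: '#ffffff',
      xaxis: { title: 'X Values', titlefont: { color: '#0066cc' } },
      yaxis: { title: 'Y Values', rangemode: 'tozero' }
    };

    Plotly.plot('graph', [traceA], layout);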

Conclusion

In this tutorial, you learned about different features of the Plotly.js library. I also covered the installation and usage of the library along with different layout attributes to customize the appearance of the charts according to your needs. 

JavaScript has become one of the de facto languages of working on the web. It’s not without its learning curves, and there are plenty of frameworks and libraries to keep you busy, as well. If you’re looking for additional resources to study or to use in your work, check out what we have available in the Envato marketplace.

In the rest of the series, you will learn about different types of basic charts that can be created using Plotly. I hope you enjoyed the tutorial, and if you have any questions or suggestions, feel free to share them in the comments.

09:30

New Course: Swift Design Patterns

If you're looking for a comprehensive guide to design patterns, our new course is ideal for you. In Swift Design Patterns, you'll get 24 detailed videos totalling 3.5 hours of viewing time, in which you'll learn some common design patterns that you can use over and over in many different languages.

Screenshot from the Swift Design Patterns course

What You’ll Learn

In this course, Envato Tuts+ instructor Derek Jensen will teach you some of the most common design patterns, and he will explain and demonstrate exactly how these patterns can be applied using the Swift programming language. You'll learn the following patterns:

Creational Patterns

  • factory
  • abstract factory
  • singleton
  • prototype
  • builder

Structural Patterns

  • adapter
  • flyweight
  • proxy
  • bridge
  • decorator
  • composite
  • facade

Behavioral Patterns

  • iterator
  • command
  • chain of responsibility
  • mediator
  • memento
  • interpreter
  • observer
  • strategy
  • state

For each pattern, you'll see a complete example in code, and you'll get an explanation of the specific challenge the pattern is solving. By the end, you'll have some powerful new coding tools in your toolkit.

Watch the Introduction

 

Take the Course

You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+. 

Plus you now get unlimited downloads from the huge Envato Elements library of 300,000+ photos and 34,000+ design assets and templates. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

September 15 2017

12:00

Build a To-Do API With Node, Express, and MongoDB

API stands for Application Programming Interface. An API makes it possible for one application to access the features of another application or service. Building APIs with Node is very easy. Yes, you heard me right!

In this tutorial, you will be building a To-Do API. After you are done here, you can go ahead to build a front end that consumes the API, or you could even make a mobile application. Whichever you prefer, it is completely fine.

Application Setup

To follow this tutorial, you must have Node and NPM installed on your machine.

Mac users can make use of the command below to install Node.
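
Assuming you use Homebrew, that would be:

    brew install node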

Windows users can hop over to the Node.js download page to download the Node installer.

Ubuntu users can use the commands below:
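
For example (installing from the default repositories; the NodeSource setup script is a common alternative):

    sudo apt-get update
    sudo apt-get install nodejs npm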

To confirm that you have Node installed, open your terminal and run node -v. You should get a prompt telling you the version of Node you have installed.

You do not have to install NPM; it comes with Node. To prove that, run npm -v from your terminal, and you will see the version you have installed.

Now go and create a directory where you will be working from, and navigate into it.

Initialize npm in the current working directory by running:
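
For example, covering both steps (the directory name is up to you):

    mkdir todo-api && cd todo-api
    npm init -y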

The -y flag tells npm to initialize using the default options.

That will create a package.json file for you. It is time to start downloading the packages you will make use of; NPM makes this hassle-free. You will install them as dependencies.

To download these packages, run:
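
The exact package list is inferred from the modules required later in this tutorial:

    npm install express mongoose body-parser lodash mongodb --save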

The --save flag tells npm to install the packages as dependencies for your application.

When you open your package.json file, you will see what I have below:

Before you start coding, you have to install MongoDB on your machine if you have not done that already. Here is a standard guide to help you in that area. Do not forget to return here when you are done.

Create the Todo Model

Create a folder called server, and inside it create another folder called models. This is where your Todo model will exist. This model will show how your Todo collection should be structured.
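
A sketch of that model, matching the numbered steps below (the file is assumed to be server/models/todo.js):

    var mongoose = require('mongoose');

    var Todo = mongoose.model('Todo', {
      text: {
        type: String,
        required: true,   // a to-do cannot be created without text
        minlength: 1,
        trim: true        // remove surrounding white space on save
      },
      completed: {
        type: Boolean,
        default: false
      },
      completedAt: {
        type: Number,
        default: null
      }
    });

    module.exports = { Todo };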

  1. You need to require the mongoose package you installed using NPM.
  2. You create a new model by calling the model function of mongoose. The function receives two parameters: the name of the model as a string, and an object which contains the fields of the model. This is saved in a variable called Todo.
  3. You create a text field for your to-do. The type here is String, and the minimum length is set at 1. You make it required so that no to-do record can be created without it. The trim option ensures that there are no white spaces when records are saved.
  4. You create a field to save a true or false value for each to-do you create. The default value is false.
  5. Another field is created to save when the to-do is completed, and the default value is null.
  6. You export the Todo module so it can be required in another file.

Set Up Mongoose

Inside your server folder, create another folder called db and a file called mongoose.js.

This file will be used to interact with your MongoDB, using Mongoose. Make it look like this.
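
Roughly like this:

    var mongoose = require('mongoose');

    // Use the built-in ES6 promise implementation for Mongoose queries.
    mongoose.Promise = global.Promise;

    // MONGODB_URI is set by the config file created in the next section.
    mongoose.connect(process.env.MONGODB_URI);

    module.exports = { mongoose };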

  1. You require the mongoose package you installed.
  2. You plug in an ES6-style promise library.
  3. You call the connect method on mongoose, passing in the link to your MongoDB database. (You will set that up soon.)
  4. You export mongoose as a module.

Set Up the Configuration

Time to set up a few configurations for your API. The configuration you will set up here will be for your development and test environment (you will see how to test your API).
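
A sketch of server/config/config.js following the steps below (the database names are assumptions):

    var env = process.env.NODE_ENV || 'development';

    if (env === 'development') {
      process.env.PORT = 3000;
      process.env.MONGODB_URI = 'mongodb://localhost:27017/TodoApp';
    } else if (env === 'test') {
      process.env.PORT = 3000;
      process.env.MONGODB_URI = 'mongodb://localhost:27017/TodoAppTest';
    }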

  1. You create an env variable and store in it the current Node environment, falling back to the string development if none is set.
  2. The if block checks if env equals development. When it does, the port is set to 3000, and the MongoDB URI is set to a particular database collection.
  3. If the first condition evaluates to false and the environment is test, the port is set to 3000, and a different MongoDB database collection is used.

Create End Points

  1. Requires the configuration file created earlier.
  2. Requires lodash installed with NPM.
  3. Requires express installed with NPM.
  4. Requires bodyParser package.
  5. Requires ObjectId from MongoDB.
  6. Requires the mongoose module you created.
  7. Requires your Todo model.
  8. Sets app to the express module imported.
  9. Sets port to the port of the environment where the application will run or port 3000.
  10. Sets up middleware to make use of body-parser.
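
Put together, the top of server/server.js might look like this (the relative paths follow the folders created earlier):

    require('./config/config');                    // 1

    var _ = require('lodash');                     // 2
    var express = require('express');              // 3
    var bodyParser = require('body-parser');       // 4
    var { ObjectID } = require('mongodb');         // 5

    var { mongoose } = require('./db/mongoose');   // 6
    var { Todo } = require('./models/todo');       // 7

    var app = express();                           // 8
    var port = process.env.PORT || 3000;           // 9

    app.use(bodyParser.json());                    // 10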

Next, you create the POST request, passing in the path and a callback function which has request and response as its parameters.

In the block of code, you set todo to the text that is passed in the body of the request, and then call the save function on the todo. When the todo is saved, you send the HTTP response—this time, the todo. Otherwise, the status code 400 is sent, indicating an error occurred.
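
A sketch of that handler, plus the call that starts the server:

    app.post('/todos', function (req, res) {
      var todo = new Todo({ text: req.body.text });

      todo.save().then(function (doc) {
        res.send(doc);            // echo the saved to-do back
      }, function (e) {
        res.status(400).send(e);  // something went wrong
      });
    });

    app.listen(port, function () {
      console.log('Started on port ' + port);
    });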

You have created HTTP methods covering all parts of your API. Now your API is ready to be tested. You can test it using Postman. If you do not have Postman installed on your machine already, go ahead and get it from the Postman website.

Start up your Node server using node server.js.

Open up Postman and send an HTTP POST request. The specified URL should be http://localhost:3000/todos.

For the body, you can use this:
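
Any JSON object with a text property works, for example:

    {
      "text": "Build a To-Do API with Node"
    }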

And you should get a response. Go ahead and play around with it.

Conclusion

In this tutorial, you learned how to build an API using Node. You made use of a lot of resources to make the API a powerful one. You implemented the necessary HTTP methods that are needed for CRUD operation.


As you continue to build in Node, you will get to understand its power and the reason why it is used worldwide.

September 14 2017

14:00

Code a Real-Time App With NativeScript: Push Notifications

NativeScript is a framework for building cross-platform native mobile apps using XML, CSS, and JavaScript. In this series, we're trying out some of the cool things you can do with a NativeScript app: geolocation and Google Maps integration, SQLite database, Firebase integration, and push notifications. Along the way, we're building a fitness app with real-time capabilities that will use each of these features.

In this tutorial, you'll learn how easy it is to add push notifications to your NativeScript app with the Firebase Cloud Messaging Service.

What You'll Be Creating

Picking up from the previous tutorial, you'll be adding push notifications to the app. A notification will be triggered when the user breaks their current record or when one of their friends takes first place away from them.

Setting Up the Project

If you have followed the previous tutorial on Firebase, you can simply use the same project and build the features that we will be adding in this tutorial. Otherwise, you can create a new project and copy the starter files into your project's app folder.

After that, you also need to install the geolocation, Google Maps, SQLite and Firebase plugins:
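
Assuming the usual plugin package names, that would be:

    tns plugin add nativescript-geolocation
    tns plugin add nativescript-google-maps-sdk
    tns plugin add nativescript-sqlite
    tns plugin add nativescript-plugin-firebase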

Once installed, you need to configure the Google Maps plugin. You can read the complete instructions on how to do this by reading the section on Installing the Google Maps Plugin in the earlier tutorial.

Next, install the fecha library for formatting dates:
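
That is:

    npm install --save fecha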

After that, you also need to configure the Firebase plugin. Be sure to read the following sections in the previous tutorial so you can get the app running:

  • Running the Project
  • Setting Up a Firebase App
  • Setting Up a Facebook App
  • Installing the Firebase Plugin
  • Configuring Facebook Integration

Since we've already set up the Firebase plugin in the previous post, there's only a little work that needs to be done to set up push notifications.

First, you have to reconfigure the plugin by going inside the node_modules/nativescript-plugin-firebase directory and running npm run config. This time, select both Facebook authentication and Messaging.

Once that's done, open the firebase.nativescript.json file in the root directory of your project, and make sure that messaging is set to true:

Next, open app/App_Resources/Android/AndroidManifest.xml and add the following services inside the <application>. This enables the Firebase messaging service for the app:

Running the Project

You can run the project by executing tns run android. But since this app will build on the geolocation functionality, I recommend that you use a GPS emulator for quickly setting and changing your location. You can read about how to do so in the section on Running the App in the earlier tutorial.

If you get any build errors, you can remove the platform and rerun the app:
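
For example:

    tns platform remove android
    tns run android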

Setting Up Firebase Cloud Functions

You'll be using Firebase Cloud Functions to create a server that will send out the push notifications. This Firebase feature is used to run back-end code whenever a specific event happens within the Firebase features that you're using. For example, it can run when new data is saved in the real-time database, or when a new user is added via the Firebase auth service. For this app, you'll be using HTTP Triggers to send push notifications when the mobile app makes a request to a specific endpoint.

To use Firebase Cloud Functions, you first need to install the firebase-tools package globally:
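
That is:

    npm install -g firebase-tools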

Next, create a new folder that will house the server code. This should be outside your app folder. Inside that folder, install the firebase-functions package:
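
For example (the folder name is just an example):

    mkdir fitness-functions && cd fitness-functions
    npm install firebase-functions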

Once it's installed, log in to Firebase by running firebase login. This opens a new browser tab that allows you to log in with your Google account. Go through the whole process and grant the requested permissions.

Once you're logged in, you can now initialize Firebase functions for a specific Firebase project:
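
That is:

    firebase init functions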

This will ask you whether you want to set up a default project or not. Select the Firebase project that you created in the previous tutorial:

setup firebase functions

Next, you will be asked if you want the dependencies installed. Say yes.

Once the dependencies have all been installed, you should see a firebase.json file and a functions folder inside the directory. The file that you'll be working on is the functions/index.js file. Open that file and you'll see the following:
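
It should look roughly like this (the generated sample may differ slightly between firebase-tools versions):

    const functions = require('firebase-functions');

    // // Create and Deploy Your First Cloud Functions
    // // https://firebase.google.com/docs/functions/write-firebase-functions
    //
    // exports.helloWorld = functions.https.onRequest((request, response) => {
    //   response.send("Hello from Firebase!");
    // });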

Uncomment the helloWorld function, and you'll get to see HTTP triggers in action.

Run the following to deploy the function to the cloud:
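
That is:

    firebase deploy --only functions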

Once the deployment is complete, it should show you the URL where the function has been deployed:

deploy firebase functions

Access that URL from your browser to see the output "Hello from Firebase!"

Adding the Server Code

Now you're ready to add the code for implementing push notifications. First, you'll add the code for the server component, then the code for the app.

Open the functions/index.js file and empty its contents. 

Creating the Firebase Function

Import the Firebase packages that you'll need:
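
Something like this; firebase-admin gives the function privileged access to your project:

    const functions = require('firebase-functions');
    const admin = require('firebase-admin');

    admin.initializeApp(functions.config().firebase);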

Create the init_push function. Note that the HTTP trigger is called for any request method, so you have to filter for the request method that you want to process. In this case, we only want to process POST requests. We expect the app to submit the id, steps, and friend_ids as the request data. 
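
A sketch of the function's skeleton under those assumptions:

    exports.init_push = functions.https.onRequest((req, res) => {
      if (req.method === 'POST') {
        const id = req.body.id;                  // the current user's ID
        const steps = req.body.steps;            // the user's new step count
        const friend_ids = req.body.friend_ids;  // IDs of the user's friends
        // ...query the database and send the notification (see below)...
      } else {
        res.status(403).send('Forbidden');
      }
    });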

Getting the User and Friends Data

Next, query the Firebase database to check if the user ID exists. This serves as a way to secure the endpoint so not just anyone can trigger push notifications. (Of course, a real app should have much better back-end security, so that users can't spoof their own data or the data of somebody else.) 

If the user does exist, query the database again so it returns all the users. Note that Firebase does not currently provide a way to return records based on an array of IDs, so we'll have to filter the relevant data ourselves:

Next, loop through the results returned from Firebase and create a new array that houses the friends_data. Once this is done, sort the array according to the number of steps by each user. The one with the highest number of steps has the first index.
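
A hedged sketch of those two steps (the users path and the shape of each record are assumptions about the database from the previous tutorial):

    admin.database().ref('users').once('value').then((snapshot) => {
      const friends_data = [];

      snapshot.forEach((child) => {
        // Keep only the users whose IDs appear in friend_ids.
        if (friend_ids.indexOf(child.key) !== -1) {
          friends_data.push(child.val());
        }
      });

      // Highest step count first.
      friends_data.sort((a, b) => b.steps - a.steps);

      // ...construct and send the notification (next sections)...
    });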

Construct the Notification Payload

Now we're ready to determine who will receive the notification and construct the notification payload. Who is in first place? Is it the current user or one of the user's friends? Since the current user will also have broken their own record when they break the overall record of whoever's in first place, we just need to check if that record has been broken.

Sending the Notification

Finally, send out the notification:
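
A sketch using the Admin SDK's sendToDevice(); the notification text and the device_token variable are assumptions:

    const payload = {
      notification: {
        title: 'Steps record broken!',
        body: 'Someone just took first place. Better get walking!'
      }
    };

    admin.messaging().sendToDevice(device_token, payload)
      .then(() => res.status(200).send('Notification sent'))
      .catch((err) => res.status(500).send(err));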

Updating the App Code

Earlier, you set up the app so that it was able to receive push notifications. This time, you'll be adding code so that your app can process those push notifications and display them to the user. 

Receiving Push Notifications

The first thing that you need to do in order to receive push notifications is to update the firebase.init() function to include a listener for receiving the device token:

This function only executes once, so you have to save the token locally using application settings. Later on, this will allow us to get the device token when the user logs in for the first time. As you may remember from the previous tutorial, we save the user's data to Firebase the first time they log in.

Next, you can add the listener for when notifications are received. This will display an alert box which uses the title and body of the message as the content:
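
A sketch of both listeners in one init call; both callback names are part of the nativescript-plugin-firebase API:

    var applicationSettings = require('application-settings');
    var firebase = require('nativescript-plugin-firebase');

    firebase.init({
      // Fires once with the token that identifies this device.
      onPushTokenReceivedCallback: function (token) {
        applicationSettings.setString('device_token', token);
      },
      // Fires whenever a push notification arrives.
      onMessageReceivedCallback: function (message) {
        alert({
          title: message.title,
          message: message.body,
          okButtonText: 'Close'
        });
      }
    });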

Saving the Device Token to Firebase

Firebase Cloud Messaging requires the device token when sending out a push notification to a specific device. Since we're already using Firebase, we'll just save the device token along with the user data. For that, you need to edit the code for saving the user's data to include the device token that we got earlier:

Triggering Push Notifications

Push Notifications are triggered when one of two things happens:

  • when the user breaks their current record
  • when one of the user's friends breaks their record and goes to first place

The first one is easy, so there's really no need for additional setup. But for the second one, you need to do a little work. First, you have to edit the code for when the auth state changes. Right after extracting the friend IDs from the Facebook result, you have to save the friend IDs using application settings.

Next, update the code for when the user stops tracking their walk. Right after the code for constructing the user data for updating the user, get the friend IDs from application settings and include them in the object which contains the request data for triggering the push notification.

Make the request to the Firebase Cloud Functions endpoint that you created earlier. Once a success response is returned, only then will the user's data be updated on the Firebase database. 

Testing Push Notifications

You can test the sending of push notifications by first uninstalling the app from the emulator or device. This allows us to properly trigger the function for getting the device token. Be sure to add console.log to output the device token:
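
For example, inside the listener you added earlier:

    onPushTokenReceivedCallback: function (token) {
        console.log('Device token: ' + token);
        applicationSettings.setString('device_token', token);
    }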

When the device token is printed in the NativeScript console, copy it, click on the Database menu on your Firebase app dashboard, and add it as a device token to all the users of the app. Use device_token as the property name.

To trigger the push notification, you can use curl to make a POST request to the Firebase Function endpoint:
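
Something like the following, with the placeholders swapped for your own values:

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{"id": "<USER_ID>", "steps": 6000, "friend_ids": ["<FRIEND_ID>"]}' \
      https://us-central1-<YOUR_PROJECT_ID>.cloudfunctions.net/init_push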

If you don't have curl installed, you can use the Postman App to send the request. Use the following settings for the request:

  • Request method: POST
  • URL: Your Firebase function endpoint
  • Headers Key: Content-type
  • Headers Value: application/json
  • Body: 
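
For the body, use the same JSON payload as in the curl example:

    {
      "id": "<USER_ID>",
      "steps": 6000,
      "friend_ids": ["<FRIEND_ID>"]
    }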

Once triggered, you'll see an output similar to the following:

push notification received

If the app isn't currently open, you'll see the notification in the notification area:

push notification outside app

Conclusion

Congratulations! You've finally completed the fitness app. Over the course of four tutorials, you've built a NativeScript app which uses Google maps, SQLite, Firebase Realtime database, and Firebase Cloud Messaging. Now you have a pretty good foundation for building NativeScript apps which use those technologies.

To learn more about NativeScript or other cross-platform mobile technologies, be sure to check out some of our other courses and tutorials here on Envato Tuts+!

  • NativeScript
    Code a Mobile App With NativeScript
    Keyvan Kasaei
  • NativeScript
    Get Started With NativeScript and Mobile Angular 2
    Reginald Dawson
  • React Native
    Get Started With React Native
    Markus Mühlberger
  • React Native
    Build a Social App With React Native
    Markus Mühlberger

12:00

Getting Started With End-to-End Testing in Angular Using Protractor

What You'll Be Creating

Protractor is a popular end-to-end test framework that lets you test your Angular application on a real browser simulating the browser interactions just the way that a real user would interact with it. End-to-end tests are designed to ensure that the application behaves as expected from a user's perspective. Moreover, the tests are not concerned about the actual code implementation.

Protractor runs on top of the popular Selenium WebDriver, which is an API for browser automation and testing. In addition to the features provided by Selenium WebDriver, Protractor offers locators and methods for capturing the UI components of the Angular application. 

In this tutorial, you will learn about:

  • setting up, configuring and running Protractor 
  • writing basic tests for Protractor
  • page objects and why you should use them
  • guidelines to be considered while writing tests
  • writing E2E tests for an application from start to finish

Doesn't that sound exciting? However, first things first.

Do I Need to Use Protractor?

If you've been using Angular-CLI, you might know that by default, it comes shipped with two frameworks for testing. They are:

  • unit tests using Jasmine and Karma
  • end-to-end tests using Protractor

The apparent difference between the two is that the former is used to test the logic of the components and services, while the latter is used to ensure that the high-level functionality (which involves the UI elements) of the application works as expected. 

If you are new to testing in Angular, I'd recommend reading the Testing Components in Angular Using Jasmine series to get a better idea of where to draw the line. 

In the former's case, you can leverage the power of Angular testing utilities and Jasmine to write not just unit tests for components and services, but basic UI tests also. However, if you need to test the front-end functionality of your application from start to end, Protractor is the way to go. Protractor's API combined with design patterns such as page objects make it easier to write tests that are more readable. Here's an example to get things rolling.

Configuring Protractor

Setting up Protractor is easy if you are using Angular-CLI to generate your project. The directory structure created by ng new is as follows.
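
Abridged, with only the test-related files shown, it looks something like this:

    ├── e2e/
    │   ├── app.e2e-spec.ts
    │   ├── app.po.ts
    │   └── tsconfig.e2e.json
    ├── src/
    ├── karma.conf.js
    ├── package.json
    └── protractor.conf.js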

The default project template created by Protractor depends on two files to run the tests: the spec files that reside inside the e2e directory and the configuration file (protractor.conf.js). Let's see how configurable protractor.conf.js is:
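
The generated configuration looks roughly like this (details vary between CLI versions):

    const { SpecReporter } = require('jasmine-spec-reporter');

    exports.config = {
      allScriptsTimeout: 11000,
      specs: [
        './e2e/**/*.e2e-spec.ts'
      ],
      capabilities: {
        browserName: 'chrome'
      },
      directConnect: true,
      baseUrl: 'http://localhost:4200/',
      framework: 'jasmine',
      jasmineNodeOpts: {
        showColors: true,
        defaultTimeoutInterval: 30000,
        print: function () {}
      },
      onPrepare() {
        require('ts-node').register({
          project: 'e2e/tsconfig.e2e.json'
        });
        jasmine.getEnv().addReporter(new SpecReporter({ spec: { displayStacktrace: true } }));
      }
    };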

If you are OK with running the tests on the Chrome web browser, you can leave this as is and skip the rest of this section.

Setting Up Protractor With Selenium Standalone Server

Setting directConnect: true lets Protractor connect directly to the browser drivers. However, at the time of writing this tutorial, Chrome is the only browser supported for direct connection. If you need multi-browser support, or want to run a browser other than Chrome, you will have to set up a Selenium standalone server. The steps are as follows.

Install Protractor globally using npm:
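
That is:

    npm install -g protractor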

This installs the command-line tool for webdriver-manager along with that of protractor. Now update the webdriver-manager to use the latest binaries, and then start the Selenium standalone server.
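
For example:

    webdriver-manager update
    webdriver-manager start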

Finally, set the directConnect: false and add the seleniumAddress property as follows:
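
The Selenium address below is the server's default; change it if you started the server elsewhere:

    exports.config = {
      // ...
      directConnect: false,
      seleniumAddress: 'http://localhost:4444/wd/hub',
      // ...
    };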

The config file on GitHub provides more information about the configuration options available on Protractor. I will be using the default options for this tutorial.

Running the Tests 

ng e2e is the only command you need to start running the tests if you are using Angular-CLI. If the tests appear to be slow, it's because Angular has to compile the code every time you run ng e2e. If you want to speed it up a bit, here's what you should do. Serve the application using ng serve.

Then fire up a new console tab and run:

The tests should load faster now.

Our Goal

We will be writing E2E tests for a basic Pastebin application. Clone the project from the GitHub repo.

Both the versions, the starter version (the one without the tests) and the final version (the one with the tests), are available on separate branches. Clone the starter branch for now. Optionally, serve the project and go through the code to get acquainted with the application at hand.

Let's describe our Pastebin application briefly. The application will initially load a list of pastes (retrieved from a mock server) into a table. Each row in the table will have a View Paste button which, when clicked on, opens up a bootstrap modal window. The modal window displays the paste data with options to edit and delete the paste. Towards the end of the table, there is a Create Paste button which can be used to add new pastes.

The sample Pastebin application.

The rest of the tutorial is dedicated to writing Protractor tests in Angular.

Protractor Basics

The spec file, ending with .e2e-spec.ts, will host the actual tests for our application. We will be placing all the test specs inside the e2e directory since that's the place we've configured Protractor to look for the specs.

There are two things you need to consider while writing Protractor tests:

  • Jasmine Syntax
  • Protractor API

Jasmine Syntax

Create a new file called test.e2e-spec.ts with the following code to get started. 
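
A minimal skeleton along these lines will do (the describe title is up to you):

    describe('Pastebin Page', () => {
      beforeEach(() => {
        // Runs before each spec in this block.
      });

      it('should display the list of pastes', () => {
        // Expectations go here.
      });
    });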

This depicts how our tests will be organized inside the spec file using Jasmine's syntax. describe(), beforeEach() and it() are global Jasmine functions. 

Jasmine has a great syntax for writing tests, and it works just as well with Protractor. If you are new to Jasmine, I would recommend going through Jasmine's GitHub page first.

The describe block is used to divide the tests into logical test suites. Each describe block (or test suite) can have multiple it blocks (or test specs). The actual tests are defined inside the test specs. 

"Why should I structure my tests this way?" you may ask. A test suite can be used to logically describe a particular feature of your application. For instance, all the specs concerned with the Pastebin component should ideally be covered inside a describe block titled Pastebin Page. Although this may result in tests that are redundant, your tests will be more readable and maintainable. 

A describe block can have a beforeEach() method which will be executed before each spec in that block. So, if you need the browser to navigate to a URL before each test, placing the code for navigation inside beforeEach() is the right thing to do.

Expect statements accept a value and are chained with matcher functions. The actual and the expected values are compared, and a boolean is returned which determines whether the test passes or fails.

Protractor API

Now, let's put some flesh on it. 
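
Here is a fleshed-out sketch; the expected heading text is an assumption about the demo app:

    import { browser, by, element } from 'protractor';

    describe('Pastebin Page', () => {
      beforeEach(() => {
        browser.get('/');
      });

      it('should display the Pastebin heading', () => {
        expect(element(by.css('.pastebin')).getText()).toContain('Pastebin');
      });
    });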

browser.get('/') and element(by.css('.pastebin')).getText() are part of the Protractor API. Let's get our hands dirty and jump right into what Protractor has to offer.

The prominent components exported by Protractor API are listed below.

  1. browser: You use the browser object for all the browser-level operations such as navigation, debugging, etc. 
  2. element(): This is used to look up an element in the DOM based on a search condition or a chain of conditions. It returns an ElementFinder object, and you can perform actions such as getText() or click() on them.
  3. element.all(): This is used to look for an array of elements that match some chain of conditions. It returns an ElementArrayFinder object. All the actions that can be performed on ElementFinder can be performed on ElementArrayFinder also.
  4. locators: Locators provide methods for finding an element in an Angular application. 

Since we will be using locators very often, here are some of the commonly used locators.

  • by.css('selector-name'): This is by far the most commonly used locator for finding an element based on the name of the CSS selector.
  • by.name('name-value'): Locates an element with a matching value for the name attribute.
  • by.buttonText('button-value'): Locates a button element or an array of button elements based on the inner text.  

Note: The locators by.model, by.binding and by.repeater do not work with Angular 2+ applications at the time of writing this tutorial. Use the CSS-based locators instead.

Let's write more tests for our Pastebin application.

The code above works, and you can verify that yourself. However, wouldn't you feel more comfortable writing tests without the Protractor-specific vocabulary in your spec file? Here's what I am talking about:

The specs appear more straightforward without the extra Protractor baggage. How did I do that? Let me introduce you to Page Objects.

Page Objects

Page Object is a design pattern which is popular in the test automation circles. A page object models a page or part of an application using an object-oriented class. All the objects (that are relevant to our tests) like text, headings, tables, buttons, and links can be captured in a page object. We can then import these page objects into the spec file and invoke their methods. This reduces code duplication and makes maintenance of code easier.

Create a directory named page-objects and add a new file inside it called pastebin.po.ts. All the objects concerned with the Pastebin component will be captured here. As previously mentioned, we divided the whole app into three different components, and each component will have a page object dedicated to it. The naming scheme .po.ts is purely conventional, and you can name it anything you want.

Here is a blueprint of the page we are testing.

Blueprint for the Pastebin component

Here is the code.

pastebin.po.ts
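
A sketch of the page object; the method names and selectors are assumptions based on the blueprint above:

    import { browser, element, by, ElementFinder, ElementArrayFinder } from 'protractor';

    export class PastebinPage {
      navigateToHome() {
        return browser.get('/');
      }

      getPastebin(): ElementFinder {
        return element(by.css('.pastebin'));
      }

      getPasteRows(): ElementArrayFinder {
        return element.all(by.tagName('tr'));
      }

      getCreatePasteButton(): ElementFinder {
        return element(by.buttonText('Create Paste'));
      }
    }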

Let's go over what we've learned thus far. Protractor's API returns objects, and we've encountered three types of objects thus far. They are:

  • promise.Promise
  • ElementFinder
  • ElementArrayFinder

In short, element() returns an ElementFinder, and element.all() returns an ElementArrayFinder. You can use the locators (by.css, by.tagName, etc.) to find the location of the element in the DOM and pass it to element() or element.all().

ElementFinder and ElementArrayFinder can then be chained with actions, such as isPresent(), getText(), click(), etc. These methods return a promise that gets resolved when that particular action has been completed. 

The reason why we don't have a chain of then()s in our test is that Protractor takes care of it internally. The tests appear to be synchronous even though they are not; therefore, the end result is a linear coding experience. However, I recommend using async/await syntax to ensure that the code is future-proof.

You can chain multiple ElementFinder objects, as shown below. This is particularly helpful if the DOM has multiple selectors of the same name and we need to capture the right one.
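
For example:

    // Find the button inside a specific container rather than anywhere in the DOM.
    element(by.css('.container')).element(by.buttonText('Create Paste'));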

Now that we have the code for the page object ready, let's import it into our spec. Here's the code for our initial tests.
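
A sketch under the same assumptions as the page object above:

    import { PastebinPage } from './page-objects/pastebin.po';

    describe('Pastebin Page', () => {
      let pastebinPage: PastebinPage;

      beforeEach(() => {
        pastebinPage = new PastebinPage();
        pastebinPage.navigateToHome();
      });

      it('should display the Create Paste button', () => {
        expect(pastebinPage.getCreatePasteButton().isPresent()).toBeTruthy();
      });
    });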

Organizing Tests and Refactoring

Tests should be organized in such a way that the overall structure appears meaningful and straightforward. Here are some opinionated guidelines that you should keep in mind while organizing E2E tests.

  • Separate E2E tests from unit tests.
  • Group your E2E tests sensibly. Organize your tests in a way that matches the structure of your project.
  • If there are multiple pages, page objects should have a separate directory of their own.
  • If the page objects have some methods in common (such as navigateToHome()), create a base page object. Other page models can inherit from the base page model. 
  • Make your tests independent from each other. You don't want all your tests to fail because of a minor change in the UI, do you?
  • Keep the page object definitions free of assertions/expectations. Assertions should be made inside the spec file.

Following the guidelines above, here's what the page object hierarchy and the file organization should look like. 

Page object hierarchy and e2e test structure in Protractor

We've already covered pastebin.po.ts and mainPage.e2e-spec.ts. Here are the rest of the files.

Base Page Object
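
A sketch of a base page object holding the shared navigation method (class and file names assumed):

    import { browser } from 'protractor';

    export class BasePage {
      navigateToHome() {
        return browser.get('/');
      }
    }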

Add Paste Page Object

Blueprint for the AddPaste component
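
A sketch of the AddPaste page object; the selectors are assumptions based on the blueprint above:

    import { element, by, ElementFinder } from 'protractor';
    import { BasePage } from './basePage.po';

    export class AddPastePage extends BasePage {
      getCreatePasteButton(): ElementFinder {
        return element(by.buttonText('Create Paste'));
      }

      // Getters for the modal's form fields and Save button go here.
    }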

Add Paste Spec File
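
And a sketch of the matching spec, under the same assumptions:

    import { AddPastePage } from './page-objects/addPaste.po';

    describe('Add Paste', () => {
      let addPastePage: AddPastePage;

      beforeEach(() => {
        addPastePage = new AddPastePage();
        addPastePage.navigateToHome();
      });

      it('should display the Create Paste button', () => {
        expect(addPastePage.getCreatePasteButton().isPresent()).toBeTruthy();
      });
    });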

Exercises

There are a couple of things missing, though: the tests for the View Paste button and the modal window that pops up after clicking the button. I am going to leave this as an exercise for you. However, I will drop you a hint. 

The structure of the page objects and the specs for the ViewPastePage are similar to that of the AddPastePage. 

Blueprint for the ViewPaste component

Here are the scenarios that you need to test:

  1. ViewPaste Page should have a button, and on click, it should bring up a modal window.
  2. The modal window should display the paste data of the recently added paste.
  3. The modal window should let you update values.
  4. The delete button should work.

Try to stick to the guidelines wherever possible. If you're in doubt, switch to the final branch to see the final draft of the code. 

Wrapping It Up

So there you have it. In this article, we've covered writing end-to-end tests for our Angular application using Protractor. We started off with a discussion about unit tests vs. e2e tests, and then we learned about setting up, configuring and running Protractor. The rest of the tutorial concentrated on writing actual tests for the demo Pastebin application. 

Please let me know your thoughts and experiences about writing tests using Protractor or writing tests for Angular in general. I would love to hear them. Thanks for reading! 

September 13 2017

14:00

What Is the Android Activity Lifecycle?

In my previous post, you learned that Intents let us send messages from one Android component to another. Well, one very important kind of component is an Activity. 

Activities are a fundamental part of Android app development. And it's impossible to understand Activities without also understanding their lifecycles. In this post, you'll learn all about the Activity lifecycle.

  • Android SDK
    What Are Android Intents?
    Chinedu Izuchukwu

Activity Lifecycle

An Activity is a single screen in Android. It is like a window in a desktop app, or a Frame in a Java program. An Activity allows you to place all your UI components or widgets together on the screen.

It's important to understand that an Activity has a lifecycle: that is to say that it can be in one of several different states, depending on what is happening with the app and with the user interaction. 

Lifecycle Methods

Let's look more closely at the lifecycle of an Android Activity. Each time the Activity state changes, one of the following lifecycle methods will be called on the Activity class. 

onCreate(): This is called when the Activity is first initialized. You need to implement this method in order to do any initialization specific to your Activity.

onStart(): This is called the first time that the Activity is about to become visible to the user, as the Activity prepares to come to the foreground and become interactive. Once this callback finishes, the onResume() method will be called.

onResume(): When the Activity goes into this state, it begins to interact with the user. The Activity continues in this state until something happens to take focus away from the app or Activity (such as an incoming call). When this happens, the onPause() method will be called.

onPause(): This method is used to pause operations that should not happen when the Activity is in the paused state. A call to this method indicates that the user is leaving the app. For example, in a music player app, an incoming call will cause the app to transition into a paused state. This should mute or pause the currently playing music. When the user returns to the app, the onResume() method will be called.

onStop(): This method is called when the Activity is no longer visible in the app. It can happen, for example, when another Activity has been loaded and is taking the full screen of the device. When this method is called, the Activity is said to be in a stopped state. In this state, the system either calls onRestart() to bring the Activity back into interaction with the user, or calls the onDestroy() method to destroy the Activity.

onDestroy(): This gets called before the Activity is destroyed. The system calls this method when a user terminates the Activity, or because the system is temporarily destroying the process that contains the Activity to save space. Be sure to free up any resources your Activity has created in this method, or else your app will have a memory leak!

onRestart(): This gets called when an Activity restarts after it had been stopped.

Starting an Activity

Most user interactions with an app cause the active Activity to be changed. So an app transitions between Activities many times during its lifetime.

It's necessary to link Activities together when one Activity needs to start another Activity. To start an Activity, you either use startActivity() or startActivityForResult(). You have to pass an Intent in either case.

Starting an Activity With No Expected Result

startActivity() is used if the newly started Activity does not need to return a result.

The following code snippet shows how to start another Activity using this method:
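
For example (SecondActivity stands in for whichever Activity you want to launch):

    Intent intent = new Intent(this, SecondActivity.class);
    startActivity(intent);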

You can also perform actions such as passing data from one Activity to another. In this case, your current Activity (the calling Activity) wants to pass data to a target Activity. This is where Intents come in handy. To learn about using Intents to start an Activity, check out my previous article.

Starting an Activity With a Result

startActivityForResult() is used to start another Activity and expects to get data back from the newly started Activity. In other words, use this when you want to get a result from the target Activity back to the calling Activity, e.g. if the target Activity is collecting some user information in a modal dialog.

You receive the result from the Activity in the onActivityResult(int requestCode, int resultCode, Intent data) method. The result will be returned as an Intent.

Example of Starting an Activity

Here is an example to show how starting an Activity works.

First, you create your MainActivity with your onCreate() method, a layout file, and a request code.

In your onCreate() method, you'll create a new instance of an intent to start your second Activity. 

When you're ready to start that Activity, say in response to a button click, you'll call startActivityForResult(), which will pass the newly created intent and the request code.

Still in your MainActivity, you need to handle Activity result events. You do this by implementing the onActivityResult() method. This is how you will receive the result from the other Activity. 

Here's how it should look:
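
A sketch of the pieces described above; the button id and the "result" extra key are assumptions:

    import android.content.Intent;
    import android.os.Bundle;
    import android.support.v7.app.AppCompatActivity;
    import android.view.View;

    public class MainActivity extends AppCompatActivity {

        private static final int REQUEST_CODE = 1;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);

            findViewById(R.id.button).setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    // Start the second Activity and expect a result back.
                    Intent intent = new Intent(MainActivity.this, SecondActivity.class);
                    startActivityForResult(intent, REQUEST_CODE);
                }
            });
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == REQUEST_CODE && resultCode == RESULT_OK) {
                String result = data.getStringExtra("result");
                // Do something with the returned data.
            }
        }
    }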

Now go ahead and create your SecondActivity. It should look something like the code below.
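
A sketch; the "result" extra key must match what MainActivity reads back:

    import android.content.Intent;
    import android.os.Bundle;
    import android.support.v7.app.AppCompatActivity;

    public class SecondActivity extends AppCompatActivity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_second);

            // Hand a result back to the calling Activity and finish.
            Intent resultIntent = new Intent();
            resultIntent.putExtra("result", "Hello from SecondActivity");
            setResult(RESULT_OK, resultIntent);
            finish();
        }
    }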

Terminating an Activity

Before an Activity terminates, the corresponding lifecycle methods will be called.

The onPause() method should stop all listeners and UI updates. The onStop() method should save the application data. Finally, the onDestroy() method will free up any resources that the Activity has allocated. 

When the user switches back to an app that has been terminated by the system, the onResume() method is called. Based on saved data, it can re-register listeners and trigger UI updates.

Activity Instance State

An Activity needs a way to keep valuable state and user data that it has obtained. This data might be obtained from user input or created while the Activity was off-screen.

For example, a change of device orientation can cause an Activity to be destroyed and recreated. In such a scenario, you need to make sure to save all Activity state before it is destroyed and reload it again when it is recreated. Otherwise, any data your Activity has at that time can be completely lost.

To save Activity state, you can override the onSaveInstanceState() method. This method is passed a Bundle object as a parameter. A bundle can contain strings, primitive data types, or objects. In this method, simply add any important state data to the bundle. This bundle will be returned to the Activity later so you can restore the Activity state.

To extract the saved state from the bundle and restore it, implement the onRestoreInstanceState() method. This callback is invoked between the onStart() and the onResume() lifecycle methods.
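
A sketch of the pair in action (userName is a hypothetical field of the Activity):

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        // Store any state worth keeping across a destroy/recreate cycle.
        outState.putString("USER_NAME_KEY", userName);
    }

    @Override
    protected void onRestoreInstanceState(Bundle savedInstanceState) {
        super.onRestoreInstanceState(savedInstanceState);
        // Invoked between onStart() and onResume().
        userName = savedInstanceState.getString("USER_NAME_KEY");
    }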

We will look deeper into Activity instance state in a future article.

Conclusion

After following this post, you'll have a good understanding of how an Activity lifecycle works. And you've learned that there are two ways to start an Activity, as well as getting some pointers to how instance state is handled in the Activity lifecycle.

Thanks for reading, and while you're here, check out some of our other posts on coding Android apps.

  • Android
    Android From Scratch: An Overview of Android Application Development
    Ashraff Hathibelagal
  • Android SDK
    How to Monetize Your Android Apps With AdMob
    Chike Mgbemena
  • Kotlin
    Kotlin From Scratch: Variables, Basic Types, and Arrays
    Chike Mgbemena

13:00

Object-Oriented Programming With JavaScript

What You'll Be Creating

Are you familiar with the term "spaghetti code"? This is a metaphor you might hear from non-JavaScript developers in criticism of the language. It is code with no structure. It will consist of line after line of statements. Some may be wrapped in functions, some not at all. And if you are lucky, all 9,000 lines of code will be in one file. This "spaghetti" structure is probably the result of procedural programming.

In procedural programming, functions are used to perform tasks. We need functions, but we also need a design we can work with. And while spaghetti is great for dinner, it’s not for code. The antidote is object-oriented programming. To understand object-oriented programming, we will cover designing a program, defining classes, and creating objects.

Designing a Program

Let’s imagine you have been assigned the task to make an application for a bookstore. Just for fun, let’s call our bookstore Amazonia. Amazonia will have books. There will be reviews for books. And we will want to look up books by author. There will be more features we will want to implement in our app, but this is enough for now.  

Object-oriented programming is centered on creating objects. So how do we go about translating these requirements into objects? One technique is to make a list of the nouns from our description and then refine the list to those relevant to the problem. We mentioned the following nouns in our problem:

  • Application
  • Bookstore
  • Books
  • Reviews
  • Author

Application is an irrelevant noun, so it can be discarded. We can also get rid of bookstore because it is similar to application. If we needed to do something with multiple bookstores, then we could keep it. We are left with books, reviews, and authors. (Authors has been pluralized because we will have multiple authors in this application.)

Now let’s look at how we will design each class. A class is a blueprint for creating objects. The book class we create will provide us with a blueprint for creating book objects. 

It is similar to how an architect uses a blueprint to create houses. The blueprint will show the bedrooms, bathrooms, the kitchen, and the living room. Many houses can be made from this blueprint. However, they all don’t have to be the same. Each house can be customized by changing the paint, the flooring, or the fixtures, for example.

Task

Write the classes you would use for a shopping cart program. The shopping cart should be able to do the following: 

  1. Hold a list of items.
  2. Add and remove items from the cart.
  3. Calculate the total of the cart.
  4. Get customer information.
  5. Create a receipt for the purchases.

Classes

To design our book class, we need to consider what the class is responsible for knowing and what it is responsible for doing. For a book, we need to know the title, the author, and the ISBN. These are our data attributes. 

Some things the class will need to be able to do are get and set the title, get and set the author, and get and set the ISBN. These will be the methods of the class. Here is what our book class should look like given our requirements:
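
A sketch of the class:

    class Book {
      constructor(title, author, isbn) {
        this.title = title;
        this.author = author;
        this.isbn = isbn;
      }

      getTitle() {
        return this.title;
      }

      setTitle(title) {
        this.title = title;
      }

      getAuthor() {
        return this.author;
      }

      setAuthor(author) {
        this.author = author;
      }

      getIsbn() {
        return this.isbn;
      }

      setIsbn(isbn) {
        this.isbn = isbn;
      }
    }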

By convention, the class name is capitalized. The constructor is a special function that declares and initializes the data attributes. Inside of the constructor function, attributes are added using the this keyword. Next, any methods of the class are listed without any separators.  

Methods beginning with get are known as accessor methods because they return a value. Methods beginning with set are mutator methods because they store a value or change the value of an attribute.  

This is the general form for defining a class:
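
In outline:

    class ClassName {
      constructor(parameter) {
        this.attribute = parameter;
      }

      methodName() {
        // method body
      }
    }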

A class can also be declared using this syntax:
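
For example, as a class expression:

    const Book = class {
      constructor(title, author, isbn) {
        this.title = title;
        this.author = author;
        this.isbn = isbn;
      }
    };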

Classes can also have static methods. A static method is a method that is a property of the class, not the object. Let’s say we want to create a static method for our book class to generate ids. This is the syntax:
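
A sketch (the id scheme here is just an example):

    class Book {
      // ...constructor and methods as before...

      static generateId() {
        return Math.floor(Math.random() * 10000);
      }
    }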

To call the method:
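
Static methods are called on the class itself:

    const id = Book.generateId(); // called on the class, not on an instance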

A natural question would be: when and why would you use a static method? I can’t say I know a good reason to use static methods. It depends on how you design your class. A static method could be used as a helper method for your objects, but then such functions could be stored in their own class. If you know a good use case, leave your thoughts in the comments.

Finally, for organization, you should store a class as a module. A module is just a file that contains your code. To make our book class into a module, we add an export statement before it.
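
For example, in Book.js:

    export class Book {
      // ...
    }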

To use the Book class in another file, we import it.
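
Something like:

    import { Book } from './Book.js';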

Here the { } contains the values that were exported from the module, and './Book.js' is a reference to the module file Book.js.

Task

Define a class for Authors and Reviews.  

Objects

The class by itself is useless to us unless we do something with it. We want to create books. In order to do that, we have to instantiate the class. Instantiate is the technical term for creating new objects. We call the object that is created from the class an instance. This is how we would create a new instance of a book:
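
For example (the values are placeholders):

    const book = new Book('I Love JavaScript', 'Jane Doe', '1234567890');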

Objects must be instantiated with the new operator. The data passed into the object are the parameters we defined in our constructor. This is the general form for instantiating a class:
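
In outline:

    const objectName = new ClassName(argument1, argument2);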

Suppose we want to add attributes to our book class like an ID, price, and the number in stock. We now have six parameters in our constructor, and that is not pretty. It’s not just bad to look at. It creates extra effort for developers using the class because they have to know the order of the arguments. A better solution is to pass an object as an argument. Example:
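
A sketch of the object-based constructor:

    class Book {
      constructor({ id, title, author, isbn, price, inStock } = {}) {
        this.id = id;
        this.title = title;
        this.author = author;
        this.isbn = isbn;
        this.price = price;
        this.inStock = inStock;
      }

      getTitle() {
        return this.title;
      }

      // ...other getters and setters...
    }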

To instantiate the object:
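
For example (the values are placeholders):

    const book = new Book({
      id: 1,
      title: 'I Love JavaScript',
      author: 'Jane Doe',
      isbn: '1234567890',
      price: 14.99,
      inStock: 100
    });

    console.log(book.getTitle()); // 'I Love JavaScript'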

In our example, we could have also accessed the title with the statement book.title because all of the attributes in the class are public. Now you may be wondering why I created all of those methods if we can access the attributes directly. Was it just to show you the syntax? Yes. Also, I want to show the benefits of organizing your code in this way. 

Bundling related code into an object is known as encapsulation. One of the benefits of encapsulation is data hiding. Data hiding means that an object’s attributes cannot be accessed outside of the class. 

In other languages such as Java and Python, we can have private attributes and private methods. Because all of our data is public by default in a JavaScript class, we can’t take advantage of this feature. Still, we should access our data with getters and setters. One convention is to prefix an attribute with an underscore _ to signal that it is private.

Task

Create a book object that uses an author object to set the author attribute.  

Final Thoughts

We have learned that a class is a blueprint for creating objects, and an object is an instance of a class. The benefit of building software into objects is that it gives a program structure and makes it more manageable. 

When we have a large program, breaking it up into objects allows the parts to be developed and maintained independently of other parts. With this modularity comes reusability. Because our code is encapsulated, the objects can be used over and over again in other parts of the program. Plus we have a unit of code that is testable. The better tested our code is, the safer it is from bugs.  

Further Reading

September 12 2017

14:00

Kotlin From Scratch: More Fun With Functions

Kotlin is a modern programming language that compiles to Java bytecode. It is free and open source, and promises to make coding for Android even more fun.  

In the previous article, you learned about packages and basic functions in Kotlin. Functions are at the heart of Kotlin, so in this post we'll look more closely at them. We'll be exploring the following kinds of functions in Kotlin:

  • top-level functions
  • lambda expressions or function literals
  • anonymous functions
  • local or nested functions
  • infix functions
  • member functions

You'll be amazed at all the cool things you can do with functions in Kotlin!

1. Top-Level Functions

Top-level functions are functions inside a Kotlin package that are defined outside of any class, object, or interface. This means that they are functions you call directly, without the need to create any object or call any class. 

If you're a Java coder, you know that we typically create utility static methods inside helper classes. These helper classes don't really do anything—they don't have any state or instance methods, and they just act as a container for the static methods. A typical example is the Collections class in the java.util package and its static methods. 

Top-level functions in Kotlin can be used as a replacement for the static utility methods inside helper classes we code in Java. Let's look at how to define a top-level function in Kotlin. 
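
Using the package, file, and function named below, the file might look like this:

    // UserUtils.kt
    package com.chikekotlin.projectx.utils

    fun checkUserStatus(): String {
        return "online"
    }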

In the code above, we defined a package com.chikekotlin.projectx.utils inside a file called UserUtils.kt and also defined a top-level utility function called checkUserStatus() inside this same package and file. For brevity's sake, this very simple function returns the string "online". 

The next thing we'll do is to use this utility function in another package or file.
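
For example:

    import com.chikekotlin.projectx.utils.checkUserStatus

    fun main(args: Array<String>) {
        println(checkUserStatus()) // prints "online"
    }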

In the preceding code, we imported the function into another package and then executed it! As you can see, we don't have to create an object or reference a class to call this function.

Java Interoperability

Given that Java doesn't support top-level functions, the Kotlin compiler behind the scenes will create a Java class, and the individual top-level functions will be converted to static methods. In our own case, the generated Java class was UserUtilsKt, with a static method checkUserStatus().

This means that Java callers can simply call the method by referencing its generated class, just like for any other static method.

Note that we can change the Java class name that the Kotlin compiler generates by using the @JvmName annotation.
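
For example:

    @file:JvmName("UserUtils")
    package com.chikekotlin.projectx.utils

    fun checkUserStatus(): String = "online"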

In the code above, we applied the @JvmName annotation and specified a class name UserUtils for the generated file. Note also that this annotation is placed at the beginning of the Kotlin file, before the package definition. 

It can be referenced from Java like this:
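
For example:

    String status = UserUtils.checkUserStatus();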

2. Lambda Expressions

Lambda expressions (or function literals) are also not bound to any entity such as a class, object, or interface. They can be passed as arguments to other functions called higher-order functions (we'll discuss these more in the next post). A lambda expression represents just the block of a function, and using them reduces the noise in our code. 

If you're a Java coder, you know that Java 8 and above provides support for lambda expressions. To use lambda expressions in a project that supports earlier Java versions such as Java 7, 6, or 5, we can use the popular Retrolambda library.

One of the awesome things about Kotlin is that lambda expressions are supported out of the box. Because lambda is not supported in Java 6 or 7, for Kotlin to interoperate with it, Kotlin creates a Java anonymous class behind the scenes. But note that creating a lambda expression in Kotlin is quite different from Java.

Here are the characteristics of a lambda expression in Kotlin:

  • It must be surrounded by curly braces {}.
  • It doesn't have the fun keyword. 
  • There is no access modifier (private, public or protected) because it doesn't belong to any class, object, or interface.
  • It has no function name. In other words, it's anonymous. 
  • No return type is specified because it will be inferred by the compiler.
  • Parameters are not surrounded by parentheses ()

And, what's more, we can assign a lambda expression to a variable and then execute it. 

Creating Lambda Expressions

Let's now see some examples of lambda expressions. In the code below, we created a lambda expression without any parameters and assigned it a variable message. We then executed the lambda expression by calling message()
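
A sketch (the message text is arbitrary):

    val message = { println("Hello, Kotlin!") }
    message() // prints "Hello, Kotlin!"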

Let's also see how to include parameters in a lambda expression. 
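
For example:

    val printMessage = { myString: String -> println(myString) }
    printMessage("Lambdas can take parameters")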

In the code above, we created a lambda expression with the parameter myString, along with the parameter type String. As you can see, after the parameter type there is an arrow, which points to the lambda body. In other words, this arrow separates the parameter list from the lambda body. To make it more concise, we can omit the parameter type completely (it is already inferred by the compiler). 

To have multiple parameters, we just separate them with a comma. And remember, we don't wrap the parameter list in parentheses like in Java. 
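
For example:

    val sum = { a: Int, b: Int -> a + b }
    println(sum(2, 3)) // 5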

However, note that if the parameter types can't be inferred, they must be specified explicitly (as in this example), otherwise the code won't compile.

Passing Lambdas to Functions

We can pass lambda expressions as parameters to functions: these are called "higher-order functions", because they are functions of functions. These kinds of functions can accept a lambda or an anonymous function as parameter: for example, the last() collection function. 

In the code below, we passed in a lambda expression to the last() function. (If you want a refresher on collections in Kotlin, visit the third tutorial in this series.) As the name says, it returns the last element in the list. last() accepts a lambda expression as a parameter, and this expression in turn takes one argument of type String. Its function body serves as a predicate to search within a subset of elements in the collection. That means that the lambda expression will decide which elements of the collection will be considered when looking for the last one.
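
A sketch with sample data:

    val stringList = listOf("in", "the", "club")
    println(stringList.last({ s: String -> s.length == 3 })) // prints "the"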

Let's see how to make that last line of code above more readable.
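
Moving the lambda outside the parentheses:

    stringList.last { s: String -> s.length == 3 }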

The Kotlin compiler allows us to remove the function parentheses if the last argument in the function is a lambda expression. As you can observe in the code above, we were allowed to do this because the last and only argument passed to the last() function is a lambda expression. 

Furthermore, we can make it more concise by removing the parameter type.
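
Continuing the same sketch:

    // The element type of stringList is String, so the compiler
    // can infer the type of s.
    val result = stringList.last { s -> s.length == 4 }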

We don't need to specify the parameter type explicitly, because the parameter type is always the same as the collection element type. In the code above, we're calling last on a list collection of String objects, so the Kotlin compiler is smart enough to know that the parameter will also be a String type. 

The it Argument Name

We can even simplify the lambda expression further again by replacing the lambda expression argument with the auto-generated default argument name it.
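
Continuing the sketch one more time:

    // A single inferred parameter can be referred to as it.
    val result = stringList.last { it.length == 4 }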

The it argument name was auto-generated because last can accept a lambda expression or an anonymous function (we'll get to that shortly) with only one argument, and its type can be inferred by the compiler.  

Local Return in Lambda Expressions

Let's start with an example. In the code below, we pass a lambda expression to the forEach() function invoked on the intList collection. This function loops through the collection and executes the lambda on each element in the list. If any element is divisible by 2, the lambda tries to stop and return.
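
A sketch of that situation (the list and print statements are illustrative):

    fun surroundingFunction() {
        val intList = listOf(1, 2, 3, 4, 5)
        intList.forEach {
            if (it % 2 == 0) {
                return // exits surroundingFunction(), not just the lambda!
            }
        }
        println("End of surroundingFunction()") // never executes
    }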

Running the above code might not give you the result you expected. This is because that return statement won't return from the lambda but instead from the containing function surroundingFunction()! This means that the last code statement in surroundingFunction() won't execute.

To fix this problem, we need to tell it explicitly which function to return from by using a label or name tag. 
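
Inside the sketch above, the loop becomes:

    intList.forEach {
        if (it % 2 == 0) {
            return@forEach // returns from the lambda only
        }
    }
    println("End of surroundingFunction()") // now executes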

In the updated code above, we specified the default tag @forEach immediately after the return keyword inside the lambda. We have now instructed the compiler to return from the lambda instead of the containing function surroundingFunction(). Now the last statement of surroundingFunction() will execute. 

Note that we can also define our own label or name tag. 
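
For example:

    intList.forEach myLabel@{
        if (it % 2 == 0) {
            return@myLabel
        }
    }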

In the code above, we defined our custom label called myLabel@ and then specified it for the return keyword. The @forEach label generated by the compiler for the forEach function is no longer available because we have defined our own. 

However, you'll soon see how this local return problem can be solved without labels when we discuss anonymous functions in Kotlin shortly.

3. Member Functions

This kind of function is defined inside a class, object, or interface. Using member functions helps us to modularize our programs further. Let's now see how to create a member function.
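
A minimal sketch of such a class:

    class Circle {
        // A member function: defined inside the class body.
        fun calculateArea(radius: Double): Double {
            return Math.PI * radius * radius
        }
    }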

This code snippet shows a class Circle (we'll discuss Kotlin classes in later posts) that has a member function calculateArea(). This function takes a parameter radius to calculate the area of a circle.

To invoke a member function, we use the name of the containing class or object instance with a dot, followed by the function name, passing any arguments if need be.
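
For example, on an instance of the Circle class above (the radius value is illustrative):

    fun main(args: Array<String>) {
        val circle = Circle()
        println(circle.calculateArea(4.5)) // instance, dot, function name, argument
    }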

4. Anonymous Functions

An anonymous function is another way to define a block of code that can be passed to a function. It is not bound to any identifier. Here are the characteristics of an anonymous function in Kotlin:

  • has no name
  • is created with the fun keyword
  • contains a function body

Because we passed a lambda to the last() function above, we can't be explicit about the return type. To be explicit about the return type, we need to use an anonymous function instead.
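
A sketch, reusing the stringList example from the lambda section:

    // An anonymous function: the fun keyword, no name, and an
    // explicit Boolean return type.
    val result = stringList.last(fun(s: String): Boolean {
        return s.length == 4
    })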

In the above code, we have replaced the lambda expression with an anonymous function because we want to be explicit about the return type. 

Towards the end of the lambda section in this tutorial, we used a label to specify which function to return from. Using an anonymous function instead of a lambda inside the forEach() function solves this problem more simply. The return expression returns from the anonymous function and not from the surrounding one, which in our case is surroundingFunction().
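
A sketch of the same loop with an anonymous function:

    intList.forEach(fun(number) {
        if (number % 2 == 0) {
            return // returns from the anonymous function only
        }
    })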

5. Local or Nested Functions

To take program modularization further, Kotlin provides us with local functions—also known as nested functions. A local function is a function that is declared inside another function. 
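
A sketch of what that can look like:

    fun printCircumferenceAndArea(radius: Double) {
        // Local functions: visible only inside the enclosing function.
        fun calCircumference(radius: Double): Double = 2 * Math.PI * radius
        fun calArea(radius: Double): Double = Math.PI * radius * radius

        println("Circumference: ${calCircumference(radius)}")
        println("Area: ${calArea(radius)}")
    }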

As you can observe in the code snippet above, we have two single-line functions, calCircumference() and calArea(), nested inside the printCircumferenceAndArea() function. The nested functions can be called only from within the enclosing function, not outside it. Again, the use of nested functions makes our program more modular and tidy.

We can make our local functions more concise by not explicitly passing parameters to them. This is possible because local functions have access to all parameters and variables of the enclosing function. Let's see that now in action:
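
The same sketch, with the nested functions reading radius straight from the enclosing scope:

    fun printCircumferenceAndArea(radius: Double) {
        fun calCircumference() = 2 * Math.PI * radius
        fun calArea() = Math.PI * radius * radius

        println("Circumference: ${calCircumference()}")
        println("Area: ${calArea()}")
    }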

As you can see, this updated code looks more readable and reduces the noise we had before. Though the enclosing function in this example is small, in a larger enclosing function that can be broken down into smaller nested functions, this feature can really come in handy.

6. Infix Functions

The infix notation allows us to call a one-argument member function or extension function in a more natural way. Besides taking exactly one argument, the function must also be defined with the infix modifier. An infix call then involves two operands: the target object on the left, and the single argument passed to the function on the right.

Creating an Infix Member Function

Let's look at how to create an infix function in a class. In the code example below, we created a Student class with a mutable kotlinScore instance field. We created an infix function by using the infix modifier before the fun keyword. As you can see below, we created an infix function addKotlinScore() that takes a score and adds to the kotlinScore instance field. 
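
A sketch of that class:

    class Student {
        var kotlinScore = 0.0

        // infix requires a member (or extension) function with exactly one parameter.
        infix fun addKotlinScore(score: Double) {
            this.kotlinScore = kotlinScore + score
        }
    }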

Calling an Infix Function

Let's also see how to invoke the infix function we have created. To call an infix function in Kotlin, we don't need to use the dot notation, and we don't need to wrap the parameter with parentheses. 
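
Using the Student class above (the score value is illustrative):

    fun main(args: Array<String>) {
        val student = Student()
        student addKotlinScore 95.00 // no dot, no parentheses
        println(student.kotlinScore) // => 95.0
    }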

In the code above, we called the infix function: the target object is student, and the double 95.00 is the parameter passed to the function.

Using infix functions wisely can make our code more expressive and clearer than the normal style. This is greatly appreciated when writing unit tests in Kotlin (we'll discuss testing in Kotlin in a future post).

The to Infix Function

In Kotlin, we can make the creation of a Pair instance more succinct by using the to infix function instead of the Pair constructor. (Behind the scenes, to also creates a Pair instance.) Note that the to function is also an extension function (we'll discuss these more in the next post).

Let's now compare creating a Pair instance with the to infix function and directly with the Pair constructor (both perform the same operation) and see which one reads better.
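
A sketch comparing the two:

    val nigeriaCallingCode = 234 to "Nigeria"      // the to infix function
    val nigeriaCallingCode2 = Pair(234, "Nigeria") // the Pair constructor
    println(nigeriaCallingCode)  // => (234, Nigeria)
    println(nigeriaCallingCode2) // => (234, Nigeria)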

As you can see in the code above, using the to infix function is more concise than directly using the Pair constructor to create a Pair instance. Remember that with the to infix function, 234 is the target object and the String "Nigeria" is the parameter passed to the function. Moreover, note that we can also do this to create a Pair type:
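
    // A sketch using the same pair: to is an ordinary function,
    // so regular dot notation works as well.
    val nigeriaCallingCode = 234.to("Nigeria")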

In the Ranges and Collections post, we created a map collection in Kotlin by giving it a list of pairs—the first value being the key, and the second the value. Let's also compare the creation of a map by using both the to infix function and the Pair constructor to create the individual pairs.
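
First, with to (the map entries are illustrative):

    val countryCodeMap = mapOf(
        234 to "Nigeria",
        1 to "USA",
        233 to "Ghana"
    )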

In the code above, we created a comma-separated list of Pair types using the to infix function and passed them to the mapOf() function. We can also create the same map by directly using the Pair constructor for each pair.
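
Something like:

    val countryCodeMap = mapOf(
        Pair(234, "Nigeria"),
        Pair(1, "USA"),
        Pair(233, "Ghana")
    )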

As you can see again, sticking with the to infix function has less noise than using the Pair constructor. 

Conclusion

In this tutorial, you learned about some of the cool things you can do with functions in Kotlin. We covered:

  • top-level functions
  • lambda expressions or function literals
  • member functions
  • anonymous functions
  • local or nested functions
  • infix functions

But that's not all! There is still more to learn about functions in Kotlin. So in the next post, you'll learn some advanced uses of functions, such as extension functions, higher-order functions, and closures. See you soon!

To learn more about the Kotlin language, I recommend visiting the Kotlin documentation. Or check out some of our other Android app development posts here on Envato Tuts+!

12:00

Make Your JavaScript Code Robust With Flow

JavaScript has always been a significant programming language, being the only language that runs reliably in the browser. Recent trends in front-end development, as well as Node.js-based back-end development, have pushed the scale and complexity of JavaScript applications.

Large applications developed by large teams can benefit from static type checking, which vanilla JavaScript lacks. Flow was developed by Facebook to address this issue. It is a static type checker that integrates into your development process, catches a lot of problems early, and helps you move fast.

What Is Flow?

Flow is a tool that checks your annotated JavaScript code and detects various issues that without it would be discovered only at runtime (or worse, not discovered and corrupt your data). Here is a quick example.
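
A minimal sketch of the kind of bug Flow catches (the function and values are illustrative):

    // @flow
    function square(n: number): number {
      return n * n;
    }

    square('2'); // Flow flags this call: a string is passed where a number is expected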

Flow vs. TypeScript

Before diving into the nitty-gritty details of Flow, it's worthwhile to compare it against other alternatives, and in particular TypeScript. TypeScript is a strict superset of JavaScript developed by Microsoft. Any JavaScript program is also a TypeScript program. 

TypeScript adds optional type annotations and overall serves the same purpose as Flow. However, there are some important differences. TypeScript is a separate programming language that compiles to JavaScript, whereas Flow annotations must be removed to get back to valid JavaScript. 

TypeScript has great tool and IDE support. Flow is catching up (e.g. JetBrains WebStorm has native Flow integration).

The most important philosophical difference is that Flow puts an emphasis on soundness. TypeScript 1.0 didn't catch null errors; TypeScript 2.0 with strict null checks measures up to Flow in this regard. But in other respects, such as generic containers, TypeScript is more permissive and lets various categories of errors through (it checks only structural typing, not nominal typing).

TypeScript as its own language adds concepts and language features such as classes, interfaces, visibility indicators (public, private, readonly), and decorators. Those features make it easier to understand and use for people coming from mainstream object-oriented languages like C++, Java, and C#.

Installation

Since Flow annotations are not standard JavaScript, they need to be removed before deploying your application. Here is how to install flow and flow-remove-types via yarn: yarn add --dev flow-bin flow-remove-types

You can add a couple of scripts to your package.json file to automate the process:
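
(The script names and the src and lib directories below are assumptions.)

    {
      "scripts": {
        "flow": "flow",
        "prepublish": "flow-remove-types src/ --out-dir lib/"
      }
    }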

You should run the prepublish script before publishing your code to the npm registry.

For other installation options (e.g. using npm or babel), check out the Flow installation guide.

To finish the installation, type: yarn run flow init

This will create the required .flowconfig file.

Type System

Flow has two important goals: precision and speed. Its type system was designed to support these goals.

Precision

Precision is achieved by analyzing how the code interacts with types, either annotated or inferred. Any mismatch raises a type error. Annotated types support nominal typing, which means that two different types with the same attributes are distinguished from each other and can't be substituted. The type of a variable is defined as the set of runtime values the variable may receive. 

Speed

Flow is fast due to a combination of modularity and distributed processing. Files are analyzed in parallel, and the results are merged later via efficient shared memory to accomplish full-program type checking.

Supported Types

Flow supports many types. In addition to primitive types, it also supports the following:

  • Object
  • Array
  • Any
  • Maybe
  • Variable
  • Tuple
  • Class
  • Interface
  • Generic

Type Annotations

Flow allows you to declare types as well as restrict variables and parameters to selected values:
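
A sketch using literal types (the names and values are illustrative):

    // @flow
    type DiceRoll = 1 | 2 | 3 | 4 | 5 | 6;

    function printDiceRoll(roll: DiceRoll) {
      console.log(roll);
    }

    printDiceRoll(4); // OK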

If you exceed the valid range, you'll get an error:
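
    printDiceRoll(7); // Flow flags this: 7 is not one of the allowed values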

You can also define complex types, including subtyping. In the following code example, the Warrior type is a subtype of Person. This means it is OK to return a Warrior as a Person from the fight() function. However, returning null is forbidden.
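
A sketch reconstructing the idea (the exact type shapes are assumptions):

    // @flow
    type Person = { name: string };
    type Warrior = Person & { strength: number, age: number };

    function fight(w1: Warrior, w2: Warrior): Person {
      if (w1.strength > w2.strength) {
        return w1;
      }
      if (w2.strength > w1.strength) {
        return w2;
      }
      return null; // Flow flags this: null is incompatible with Person
    }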

To fix it, let's return the younger warrior if both warriors have the same strength:
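
    function fight(w1: Warrior, w2: Warrior): Person {
      if (w1.strength > w2.strength) {
        return w1;
      }
      if (w2.strength > w1.strength) {
        return w2;
      }
      return w1.age <= w2.age ? w1 : w2; // tie: the younger warrior wins
    }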

Flow allows even more precise control via class extension, invariance, co-variance, and contra-variance. Check out the Flow documentation on variance.

Configuration

Flow uses the .flowconfig configuration file in the root directory of your projects. This file contains several sections that let you configure what files Flow should check and the many aspects of its operation. 

Include

The [include] section controls what directories and files should be checked. The root directory is always included by default. The paths in the [include] sections are relative. A single star is a wild-card for any filename, extension, or directory name. Two stars are a wild-card for any depth of directory. Here is a sample [include] section:
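
(The paths are illustrative.)

    [include]
    ../shared
    ../*/src
    **/__tests__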

Ignore

The [ignore] section is the complement of [include]. Files and directories you specify here will not be checked by Flow. Strangely, it uses a different syntax (OCaml regular expressions) and requires absolute paths. Changing this is on the roadmap of the Flow team.

Until then, remember that the include section is processed first, followed by the ignore section. If you include and ignore the same directory and/or file, it will be ignored. To address the absolute path issue, it is common to prefix every line with .*. If you want to ignore directories or files under the root, you can use the <PROJECT_ROOT> placeholder instead of .*. Here is a sample [ignore] section:
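
(The paths are illustrative.)

    [ignore]
    .*/node_modules/.*
    .*/build/.*
    <PROJECT_ROOT>/scripts/.*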

Libs

Any non-trivial JavaScript application uses lots of third-party libraries. Flow can check how your application is using these libraries if you provide special libdef files that contain type information about these libraries. 

Flow automatically scans the "flow-typed" sub-directory of your project for libdef files, but you may also provide the path of libdef files in the [libs] section. This is useful if you maintain a central repository of libdef files used by multiple projects.
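
A sketch of such a section (the path is illustrative):

    [libs]
    ../central-libdefs/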

Importing existing type definitions (and creating your own when the target library doesn't provide any) is pretty simple.

Lints

Flow has several lint rules you can control and determine how to treat them. You can configure the rules from the command line, in code comments, or in the [lints] section of your config file. I'll discuss linting in the next section, but here is how to configure it using the [lints] section:
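
    [lints]
    all=warn
    untyped-type-import=error
    sketchy-null-bool=off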

Options

The [options] section is where you get to tell Flow how to behave in a variety of cases that don't deserve their own section, so they are all grouped together.

There are too many options to list them all here. Some of the more interesting ones are:

  • all: set to true to check all files, not just those with @flow
  • emoji: set to true to add emojis to status messages
  • module.use_strict: set to true if you use a transpiler that adds "use strict;"
  • suppress_comment: a regex that defines a comment to suppress any flow errors on the following line (useful for in-progress code)

Check out all the options in the Flow guide to configuring options.

Version

Flow and its configuration file format evolve. The [version] section lets you specify which version of Flow the config file is designed for to avoid confusing errors.

If the version of Flow doesn't match the configured version, Flow will display an error message.

Here are a few ways to specify the supported versions:
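
(The version numbers are illustrative; each block below is a separate alternative.)

    [version]
    0.54.0

    [version]
    >=0.13.0 <0.14.0

    [version]
    ^0.54.0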

The caret version keeps the first non-zero component of the version fixed. So ^1.2.3 expands to the range >=1.2.3 < 2.0.0, and ^0.4.5 expands to the range >= 0.4.5 < 0.5.0.

Using Flow From the Command Line

Flow is a client-server program. A Flow server must be running, and the client connects to it (or starts it if it's not running). The Flow CLI has many commands and options that are useful for maintenance and introspection purposes as well as for temporarily overriding configuration from .flowconfig.

Typing flow --help shows all the commands and options. To get help on a specific command, type flow <command> --help. For example:
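
    flow check --help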

Important commands are:

  • init: generate an empty .flowconfig file
  • check: do a full Flow check and print the results 
  • ls: display files visible to Flow
  • status (default): show current Flow errors from the Flow server
  • suggest: suggest types for the target file

Linting With Flow

Flow has a linting framework that can be configured via the .flowconfig file as you saw earlier, through command-line arguments, or in code files using flowlint comments. All configuration methods consist of a list of key-value pairs where the key is a rule and the value is the severity. 

Rules

There are currently three rules: all, untyped-type-import, and sketchy-null. The "all" rule is really the default handling for any errors that don't have a more specific rule. The "untyped-type-import" rule is invoked when you import a type from an untyped file. The "sketchy-null" rule is invoked when you do an existence check on a value that can be false or null/undefined. There are more granular rules for:

  • sketchy-null-bool
  • sketchy-null-number
  • sketchy-null-string
  • sketchy-null-mixed

Severity Levels

There are also three severity levels: off, warn, and error. As you can imagine, "off" skips the type check, "warn" produces warnings, which don't cause the type check to exit and don't show up by default in the CLI output (you can see them with --include-warnings), and "error" is handled just like Flow errors and causes the type check to exit and display an error message.

Linting With Command-Line Arguments

Use the --lints command-line argument to specify multiple lint rules. For example:

flow --lints "all=warn, untyped-type-import=error, sketchy-null-bool=off"

Linting With flowlint Comments

There are three types of comments: flowlint, flowlint-line, and flowlint-next-line.

The "flowlint" comment applies a set of rules in a block until overridden by a matching comment:
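
(A sketch; the rule choices are illustrative.)

    /* flowlint
     *   sketchy-null:error,
     *   untyped-type-import:off
     */
    // ...code checked with the settings above...
    /* flowlint sketchy-null:warn, untyped-type-import:warn */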

If there is no matching comment, the settings simply apply until the end of the file.

The "flowlint-line" applies just to the current line:  
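
(A sketch; the variable is illustrative.)

    const x: ?boolean = null;
    if (x) { } // flowlint-line sketchy-null-bool:off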

The "flowlint-next-line" applies to the line following the comment:
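
    // flowlint-next-line sketchy-null-bool:off
    if (x) { }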

Conclusion

Large JavaScript projects developed by large teams can benefit a lot from static type checking. There are several solutions for introducing static type checking into a JavaScript codebase. 

Facebook's Flow is a recent and robust solution with excellent coverage, tooling, and documentation. Give it a try if you have a large JavaScript codebase.

September 11 2017

13:00

Build a To-Do API With Node and Restify

Introduction

Restify is a Node.js web service framework optimized for building semantically correct RESTful web services ready for production use at scale. In this tutorial, you will learn how to build an API using Restify, and for learning purposes you will build a simple To-Do API.

Set Up the Application

You need to have Node and NPM installed on your machine to follow along with this tutorial.

Mac users can make use of the command below to install Node.
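
(This assumes Homebrew is installed.)

    brew install node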

Windows users can hop over to the Node.js download page to download the Node installer.

Ubuntu users can use the commands below.
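
(One common approach uses the NodeSource repository; the version in the URL is illustrative.)

    curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
    sudo apt-get install -y nodejs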

To show that you have Node installed, open your terminal and run node -v. You should get a prompt telling you the version of Node you have installed.

You do not have to install NPM because it comes with Node. To prove that, run npm -v from your terminal, and you will see the version you have installed.

Create a new directory where you will be working from.
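
(The directory name is illustrative.)

    mkdir restify-todo-api
    cd restify-todo-api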

Now initialize your package.json by running the command:
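
    npm init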

You will be making use of a handful of dependencies:

  • Mongoose
  • Mongoose API Query (lightweight Mongoose plugin to help query your REST API)
  • Mongoose TimeStamp (adds createdAt and updatedAt date attributes that get auto-assigned to the most recent create/update timestamps)
  • Lodash
  • Winston (a multi-transport async logging library)
  • Bunyan Winston Adapter (allows the use of the winston logger in a restify server without actually using bunyan, the default logging library)
  • Restify Errors
  • Restify Plugins

Now go ahead and install the modules.
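
(A sketch of the install command; it assumes these npm package names, and adds restify itself.)

    npm install --save restify restify-errors restify-plugins mongoose \
        mongoose-api-query mongoose-timestamp lodash winston bunyan-winston-adapter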

The packages will be installed in your node_modules folder. Your package.json file should look similar to what I have below.
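
(A sketch; the version numbers are illustrative and will reflect whatever npm installed.)

    {
      "name": "restify-todo-api",
      "version": "1.0.0",
      "main": "index.js",
      "dependencies": {
        "bunyan-winston-adapter": "^0.2.0",
        "lodash": "^4.17.4",
        "mongoose": "^4.11.1",
        "mongoose-api-query": "^0.4.0",
        "mongoose-timestamp": "^0.6.0",
        "restify": "^5.0.0",
        "restify-errors": "^5.0.0",
        "restify-plugins": "^1.6.0",
        "winston": "^2.3.1"
      }
    }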

Before you go ahead, you have to install MongoDB on your machine if you have not done that already. Here is a standard guide to help you in that area. Do not forget to return here when you are done.

When that is done, you need to tell mongo the database you want to use for your application. From your terminal, run:
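
(The database name is illustrative.)

    mongo
    > use restify-todo-api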

Now you can go ahead and set up your configuration.

The file should look like this:
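
(A sketch; the file name config.js and the values are assumptions.)

    // config.js
    module.exports = {
        name: 'restify-todo-api',
        version: '1.0.0',
        env: process.env.NODE_ENV || 'development',
        port: process.env.PORT || 3000,
        db: {
            uri: process.env.MONGODB_URI || 'mongodb://127.0.0.1:27017/restify-todo-api'
        }
    };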

Set Up the To-Do Model

Create your to-do model. First, you create a directory called models.
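
    mkdir models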

You will need to define your to-do model. Models are defined using the Schema interface. The Schema allows you to define the fields stored in each document along with their validation requirements and default values. First, you require mongoose, and then you use the Schema constructor to create a new schema interface as I did below. I also made use of two modules called mongooseApiQuery and timestamps.

MongooseApiQuery will be used to query your collection (you will see how that works later on), and timestamps will add created_at and modified_at timestamps for your collection.

The file you just created should look like what I have below.
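
(A sketch of models/todo.js; the field names are assumptions.)

    // models/todo.js
    const mongoose = require('mongoose');
    const mongooseApiQuery = require('mongoose-api-query');
    const timestamps = require('mongoose-timestamp');

    const TodoSchema = new mongoose.Schema({
        name: { type: String, required: true },
        status: { type: String, enum: ['pending', 'done'], default: 'pending' }
    });

    // Query the collection from URL parameters, and add
    // created_at / modified_at timestamps.
    TodoSchema.plugin(mongooseApiQuery);
    TodoSchema.plugin(timestamps, {
        createdAt: 'created_at',
        updatedAt: 'modified_at'
    });

    module.exports = mongoose.model('Todo', TodoSchema);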

Set Up the To-Do Routes

Create another directory called routes, and a file called index.js. This is where your routes will be set.

Set it up like so:
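
(A sketch of routes/index.js covering the behavior described below; the handler details are assumptions.)

    // routes/index.js
    const errors = require('restify-errors');
    const Todo = require('../models/todo');

    module.exports = function (server) {
        // List all to-dos, letting mongoose-api-query translate
        // any query-string filters.
        server.get('/todos', (req, res, next) => {
            Todo.apiQuery(req.query, (err, todos) => {
                if (err) {
                    req.log.error(err);
                    return next(new errors.InvalidContentError(err.message));
                }
                res.send(todos);
                next();
            });
        });

        // Create a new to-do from the POST body.
        server.post('/todos', (req, res, next) => {
            const todo = new Todo(req.body);
            todo.save((err, saved) => {
                if (err) {
                    req.log.error(err);
                    return next(new errors.InternalError(err.message));
                }
                res.send(201, saved);
                next();
            });
        });
    };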

The file above does the following:

  • Requires module dependencies installed with NPM.
  • Performs actions based on the request received.
  • Throws errors whenever one (or more) is encountered, and logs them to the console.
  • Queries the database when listing all to-dos, and saves new to-dos when they are posted.

Now you can create the entry for your application. Create a file in your working directory called index.js.
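
(A sketch of index.js wiring together the config, logger, plugins, and routes sketched above.)

    // index.js
    const restify = require('restify');
    const plugins = require('restify-plugins');
    const mongoose = require('mongoose');
    const winston = require('winston');
    const bunyanWinston = require('bunyan-winston-adapter');
    const config = require('./config');

    // Output info-level logs to the console using Winston.
    const log = new winston.Logger({
        transports: [
            new winston.transports.Console({ level: 'info', timestamp: true })
        ]
    });

    // Initialize the server, adapting Winston to Restify's
    // bunyan-based logging interface.
    const server = restify.createServer({
        name: config.name,
        version: config.version,
        log: bunyanWinston.createAdapter(log)
    });

    // Middleware, set up with Restify plugins.
    server.use(plugins.bodyParser({ mapParams: true }));
    server.use(plugins.acceptParser(server.acceptable));
    server.use(plugins.queryParser({ mapParams: true }));
    server.use(plugins.fullResponse());

    // Start the server, then open the mongoose connection and mount the routes.
    server.listen(config.port, () => {
        mongoose.connection.on('error', (err) => {
            log.error('Mongoose connection error: ' + err);
            process.exit(1);
        });
        mongoose.connection.on('open', () => {
            require('./routes')(server);
            log.info('Server listening on port ' + config.port);
        });
        mongoose.connect(config.db.uri);
    });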

You have set up your entry file to do the following:

  • Require modules installed using NPM.
  • Output info level logs to the console using Winston Logger. With this, you get to see all the important interactions happening on your application right on your console.
  • Initialize the server and set up middleware using Restify plugins.
  • bodyParser parses POST bodies into req.body, automatically choosing a parser based on the request's content type.
  • acceptParser parses the Accept header and makes sure the server can respond in a format the client asked for.
  • queryParser parses URL query parameters into req.query.
  • fullResponse sets up the default response headers (including CORS headers).
  • Next, you start your server and create a mongoose connection. Logs are outputted to the console dependent on the result of creating the mongoose connection.

Start up your node server by running:
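
    node index.js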

Open up Postman and send an HTTP POST request. The specified URL should be http://localhost:3000/todos.

For the request body, you can use this:
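
(A sketch matching the model fields assumed above.)

    {
        "name": "Learn Restify",
        "status": "pending"
    }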

And you should get a response.

Conclusion

You have been able to build a standard To-Do API using Restify and Node.js. You can enhance the API by adding new features such as descriptions of the to-dos, time of completion, etc.

By building this API, you learned how to create logs using Winston Logger—for more information on Winston, check the official GitHub page. You also made use of Restify plugins, and more are available in the documentation for plugins.

You can dig further into the awesomeness of Restify, starting with the documentation.
