Saturday, October 22, 2016

cmocka and cmake

I am using cmocka as the unit test framework to illustrate the concepts used in my book “A Guide to Successful Unit Tests”.

cmocka relies on the cmake build system to build the cmocka dynamic library and to compile the test scripts into executables. So cmake will have to be installed, as well as cmocka, for it to work correctly.

cmake can be downloaded from http://www.cmake.org/download/
cmocka can be downloaded from https://cmocka.org/files/

I am using a MacOS computer, so my references are based on that setup.

Installing CMake

When installing cmake, many different packaged versions are available for download. Since my computer is a MacOS, I downloaded the .dmg package. After I extracted and installed it, I added the path to the cmake executable to my PATH.

export PATH=/Applications/CMake.app/Contents/bin:$PATH

I tested that the shell can find it by getting it to report its version number.

localhost:~ tehnyitchin$ ccmake --version
ccmake version 3.3.0

CMake suite maintained and supported by Kitware (kitware.com/cmake).
localhost:~ tehnyitchin$


Checking cmocka is working

Once you have downloaded cmocka and unpacked it, you can test whether cmake and cmocka are going to work correctly. The README file has the instructions. In the example below, I have unzipped cmocka into a directory called cmocka.

cd cmocka
mkdir build
cd build
cmake ../.
make


After the build, the test binaries are in the subdirectory example. Executing one of the test binaries runs its test cases. Below is an execution of simple_test.

localhost:example tehnyitchin$ ./simple_test
[==========] Running 1 test(s).
[ RUN ] null_test_success
[ OK ] null_test_success
[==========] 1 test(s) run.
[ PASSED ] 1 test(s).
localhost:example tehnyitchin$




I am writing a Guide to Successful Unit Tests. You can get it here at Leanpub.

Installing gcc on an oldish OSX so I can get code coverage to work

One of the computers I am using runs OSX 10.8.5, known as Mountain Lion. I want to install gcc on it, but I had some difficulty finding the right instructions.

The default gcc that is installed is a symlink to

localhost:bin tehnyitchin$ ll gcc*
lrwxr-xr-x  1 root  wheel  12 Aug 18  2013 gcc -> llvm-gcc-4.2


So it is not really the real gcc that you can get from GNU. Scouring the internet, I came across these instructions from the Helsinki University Geodynamics Group:

https://wiki.helsinki.fi/display/HUGG/Install+for+older+versions+of+Mac+OS+X

I am working on this because I want to get code coverage working. It is not working at all with the default llvm-gcc-4.2: compiling the code with the --coverage flag produces nothing.

Typically, when the code is compiled with --coverage, a .gcno file is also generated; gcov uses this file as part of its code coverage measurements. If I enable the -v option to increase the compiler's verbosity, I can see that it claims to link the gcov library, but I am not sure it actually does.

This question on Stack Overflow shows that code coverage on my version of OSX is not working:
http://stackoverflow.com/questions/8622293/no-code-coverage-with-mac-os-x-lion-and-xcode-4-llvm-g-4-2

When I compile and execute a simple main.c, I do not get the expected .gcno file. However, when I switch from gcc to clang, the .gcno files are generated.
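
For reference, below is the kind of minimal main.c I used for this check. It is only a sketch; the commands in the comment assume clang with gcov-style coverage, which is what ended up working for me.

/* main.c - minimal program to check that coverage data is generated.
 *
 * Build, run and measure:
 *   clang --coverage -o main main.c
 *   ./main
 *   gcov main.c
 *
 * The compile step should produce main.gcno, and the run should
 * produce main.gcda; gcov reads both to report the coverage.
 */
#include <stdio.h>

int main(void)
{
    printf("coverage check\n");
    return 0;
}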

Back to the story: after downloading Xcode 4 as per the instructions from Helsinki, it failed to install for some unknown reason, and the logs did not show anything obvious. I did not continue with the installation, as clang was generating the code coverage files for me. My feeling is that Apple built llvm-gcc-4.2 without code coverage support. To overcome this, the real gcc is needed. Luckily, in my case, clang came to the rescue.


I am writing a Guide to Successful Unit Tests. You can get it here at Leanpub.

Unit Test code and technical debt

Technical debt has many different meanings, depending on the context in which it is used. Wikipedia has a nice statement that I quite like.

“The debt can be thought of as work that needs to be done before a particular job can be considered complete or proper. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on.”  – [0]

Unit testing is one area where technical debt is easily acquired, and the amount of debt can grow exceptionally quickly. It definitely needs forethought and careful planning before it is used in full force. The following questions must be answered fully.

What language will the test code be implemented in? – This is a critical contributor to technical debt. If you use a language that the majority of your development team is not familiar with, the chances of increasing technical debt are quite high. As the language is not well known by your team, training will need to be provided, both to the current team and to the future team that will maintain the code. For example, implementing the test code in Java while the code under test is in C means that your development team must know both Java and C to deliver good software.

Is the infrastructure of your unit test environment well supported? – Using a unit test framework that does not have good support (either through an online community or commercially) means that supporting it falls to your development team. When the code enters its maintenance phase, the unit test framework still needs to be maintained, especially as the computers and operating systems it runs on are revised to newer versions. This is particularly true if the code has a long lifetime, as most embedded software does. As more test cases are implemented, the amount of technical debt grows, and supporting the unit test framework becomes quite difficult.

Is the unit test framework integrated well with your tool chain? – Using a unit test framework that is not well integrated into your tool chain means that the results from your test runs must be analysed manually. For a small project, manual analysis of the test results is OK and relatively easy. However, as the project grows into a large code base with a high level of complexity, good integration into your tool chain is a must. This ensures that the results are easily analysed and the correct notifications are generated. It ensures that continuous integration can be achieved easily and automatically. Removing manual work also reduces the risk of performing a manual process incorrectly.

By answering these questions, there is a chance that your technical debt could become technical credit.


I am writing a Guide to Successful Unit Tests. You can get it here at Leanpub.

References
[0] – https://en.wikipedia.org/wiki/Technical_debt

Should it be the same or different?

The language of the unit test framework is extremely important. It is just as important as the language of the software module. In fact, the test scripts you write are themselves code that needs to be managed, so ideally the principles of software engineering apply to the test scripts as well.

To reduce the complexity in your test scripts, it is ideal if the test scripts are written in the same language as the software module under test. If you are writing your software module in C, your test scripts should also be written in C. If you are using Rust, then Rust should be used for the test scripts.

One advantage of using the same language for both the software module and the test script is that the tests can be compiled to run on the target. The big assumption is that the language is already supported by your target; in some cases, it is not. If you are developing on embedded Linux and your test scripts are written in Python, the chance of your version of Python being supported on your target is pretty good. However, if you are targeting an 8-bit Atmel AVR, the chance of getting Python support on the target is pretty slim. If the test script is in the same language as the software module for your target, you can be fairly certain that testing on target is possible.

The other advantage of using the same language is the ability to debug. In a mixed-language environment, the chance of being able to debug your test script together with your software module is pretty small. For example, if the software module is written in C and the test script is also written in C, the resulting binary can easily be debugged using gdb or another debugger, from entry into main() until it exits main(). Variables can also be inspected and modified during debugging quite easily. In a mixed-language environment, the public names of variables may be mangled, making them difficult to find for inspection or to modify. A minimal sketch of the single-language case follows.
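
Here is a minimal sketch of the single-language case using cmocka; the add() function is a hypothetical module under test. Both the module and the test are plain C, so they compile into one ordinary binary that gdb can step through from main() onwards.

#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>

/* the software module under test; in a real project this would live in its own .c file */
static int add(int a, int b)
{
    return a + b;
}

/* the test script, written in the same language as the module */
static void test_add(void **state)
{
    (void)state;
    assert_int_equal(add(2, 3), 5);
}

int main(void)
{
    /* build with something like: gcc -g test_add.c -lcmocka */
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_add),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}

There is no name mangling and no foreign function interface in the way; a break point can be set in add() or in test_add() with exactly the same debugger commands.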

To sum up: unless you have a compelling reason not to, keep your test scripts and software module in a single language domain such as C. Do not mix your languages.

I am writing a Guide to Successful Unit Tests. You can get it here at Leanpub.

Write the code first, test code that is.

In a software development process, certain steps must be taken to ensure that your software module is delivered as bug free as possible. The aim is for it to work without any bugs. However, the question is what bug free means exactly. One possible definition is that the software module is considered bug free if it fulfils all of its requirements, functional and non-functional. Beyond that definition, it is hard to control the injection of bugs from outside sources. For example, the compiler may generate incorrect code and cause the software module to misbehave. Technically, this is a bug in the product and still has to be fixed, but it is hard to control if the behaviour of the compiler is not well known.
With that definition, the steps in the process are clear. First the requirements are finalised, and then the implementation can start. Since the requirements are finalised, it makes sense for the test code to be written first. From the requirements, the functional behaviour of the software module is well defined. This means that the input data into the software module is known, as well as the results coming out of it. If these are not clear, or there are doubts, it is a sign that the requirements are not clear enough. The requirements need more work.

Once the test code is written and executed, all the test cases will obviously fail, as the code for the software module has not been written yet. Now is the right time to start writing the code for the software module. As you continue to write the software module, the test cases act as validation for it. The implementation continues until all of your test cases pass. A sketch of this test-first step follows.
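
As a sketch of what this looks like in practice (using cmocka, with a hypothetical clamp() function derived from a requirement), the test below is written against a function that does not exist yet. It will not even link until clamp() is implemented, which is exactly the point: the expected inputs and outputs are pinned down before the module code is written.

#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>

/* declared in the module's header; the implementation does not exist yet */
int clamp(int value, int lo, int hi);

/* derived directly from the requirement: values are limited to [lo, hi] */
static void test_clamp_limits_value(void **state)
{
    (void)state;
    assert_int_equal(clamp(15, 0, 10), 10);  /* above the range: clipped to hi */
    assert_int_equal(clamp(-5, 0, 10), 0);   /* below the range: clipped to lo */
    assert_int_equal(clamp(5, 0, 10), 5);    /* within the range: unchanged */
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_clamp_limits_value),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}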

Suppose instead that the code for the software module is written before the test code. The downside is that the chance of getting your software module correct is low, and you will not know it until you have some test code to do the testing. You have just taken on some technical debt. If your test code is written first, it is very likely to be correct the first time, as all the inputs, all the outputs and the behaviours are well defined.

Save time and effort by writing the test code first, and then write the code for your software module.

I am writing a Guide to Successful Unit Tests.
You can get it here at Leanpub, or here at Gumroad, and read about these topics and more.

Where are your input values coming from?

When designing your test cases, you have to decide what type of input values you want to use, but also consider where your input values are coming from. The most obvious way of injecting values is to pass them in via the function parameters. It is logical, and it is the formal method. However, it is only one of the ways of injecting values. The three most popular methods are:

Function Parameters – Function parameters are part of the full definition of the function and are one of the fundamental ways of passing data into it. The range of values each parameter can take is defined by its type. Function parameters can also be used to get values out of the function. In C, function parameters live on the stack, and their lifetime ends when the function returns.

Returned Values – The low-level functions called by your module under test must also be considered as inputs. From a unit test perspective, their return values can be controlled via mocked functions. The range of values is dependent on the data type. The other thing to consider is that this method does not necessarily inject the values at the start of your function under test.

Global Variables – Variables used in the software module that are accessible outside of the module. This method is best avoided in complicated code bases, as it tends to create a software module that is prone to unpredictable behaviour if the global variables are not controlled well. Global variables live in static storage, and they are alive for as long as the program is running.

For the most part, the input methods come down to how you have designed your module. Some design guidelines forbid mechanisms such as global variables, so make sure that your methods match your design guidelines. The sketch below shows all three injection points in a single cmocka test.
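
In this sketch, the module function is_over_threshold(), the mocked read_sensor() and the global g_threshold are all hypothetical; in a real project the mock would replace the real read_sensor() at link time.

#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>

int g_threshold;              /* input 3: a global variable */
int read_sensor(void);        /* low-level call made by the module */

/* the module under test; 'offset' is input 1, a function parameter */
int is_over_threshold(int offset)
{
    return (read_sensor() + offset) > g_threshold;   /* input 2: a returned value */
}

/* mocked low-level function; its return value is queued by the test */
int read_sensor(void)
{
    return (int)mock();
}

static void test_over_threshold(void **state)
{
    (void)state;
    g_threshold = 100;                   /* inject via the global variable */
    will_return(read_sensor, 90);        /* inject via the mocked return value */
    assert_true(is_over_threshold(20));  /* inject via the function parameter */
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_over_threshold),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}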

I am writing a Guide to Successful Unit Tests.
You can get it here at Leanpub, or here at Gumroad, and read about these topics and more.

Rust approach to tests

About nine months ago, I started to hear good things about a new language called Rust. It really gained traction for me when I read a blog post by Gergely Imreh (@imrehg), who wrote about it as his language of the month. The two things I kept reading about Rust are that it is fast and it is safe; it was designed from the outset with these two qualities in mind. For a system-level language, this is great!

I started reading the Rust docs and was pleasantly surprised that unit testing is a built-in design feature of the language. I was really surprised at this. With most languages, unit testing is an add-on feature implemented by another vendor or another group. Look at C: apart from the fact that its behaviour is sometimes unpredictable, a unit test framework is not even defined. You have to use cmocka, or Unity, or one of the many other C test frameworks. The same goes for Python, PHP, Java, etc.

All the details about testing in Rust are shown in this chapter of the book.
The more I read about Rust, the more it strikes me that its designers are seasoned C developers who suffered through many projects with the deficiencies of C, and set out to design a language with those deficiencies removed. Rust is looking really good as a replacement for C or C++.

I am writing a Guide to Successful Unit Tests.
You can get it here at Leanpub, or here at Gumroad, and read about these topics and more.

Blinking LEDs

In the embedded world, blinking a LED is the equivalent of the “Hello World” program. The simple task of blinking a LED goes a long way towards proving that you have a work flow that works. It is a huge milestone for the project. This activity is sometimes known as bringing up the board: a new board is provided, and software is executed on it for the first time.

For those who are not familiar with the embedded concept, blinking a LED proves that your tool chain is good and that your understanding of how to access the hardware is sound.
The simplest implementation of the blinking LED application is shown below (the register names are illustrative):

int main(void)
{
    PORTA_PIN0_CFG = PUSH_PULL_OUTPUT;   /* configure PORTA pin 0 as a push-pull output */
    while (1)
    {
        PORTA_DATA ^= 0x01;              /* flip PORTA pin 0 on every pass */
    }
    return 0;
}

The code exclusive-ORs the current value of PORTA pin 0, effectively flipping the bit. If you are able to build this code and program it into your board, you can put a CRO on PORTA pin 0 and see a nice signal toggling up and down. The frequency and duty cycle default to the fastest the loop can run.
If you are able to see a toggling signal, you have reached a significant milestone. The hardware is considered good enough to start developing code on. Let's break it down.
Your compiler and linker setup. Seeing the toggling signal on PORTA pin 0 means that your tool chain has created binary code that your microcontroller can understand. It also means that your tool chain has linked all the binaries, resolved the addresses in your code correctly, and that your specified memory map allows the microcontroller to boot. Getting the memory map right is a tricky task, as you must have the address locations and sizes correct for items such as the stacks, heap and constant data.
Your debugging setup. Seeing the toggling signal shows that you can program the microcontroller with your debugger setup and start the code running. If your debugger connects to the microcontroller via JTAG, you can set break points and inspect variables and memory addresses. This is a big confidence booster for the developers, as it gives them the tools to work on the problem.
Your hardware. Seeing the toggling signal shows that your hardware platform is powered correctly and is able to provide you with a stable platform on which to start developing code.
If you are wondering whether the above code will actually blink a LED: unless the clock frequency of your microcontroller is quite low, it probably will not visibly illuminate one, because the frequency of the output is far too high. To have it blink, replace the body of the while(1) loop with code tied to a timer event that occurs every 500ms; this proves the hardware further. You will also have to check whether the output pin can drive enough current into the LED for illumination; otherwise, you will have to use a transistor or a FET to drive the LED. A sketch of the timer-based version follows.
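
As a sketch of the timer-driven version (the register definitions and the millisecond tick counter are illustrative; the timer interrupt that increments the counter is not shown):

#include <stdint.h>

/* illustrative register definitions; real addresses come from the part's header file */
#define PORTA_PIN0_CFG   (*(volatile uint8_t *)0x1000u)
#define PORTA_DATA       (*(volatile uint8_t *)0x1001u)
#define PUSH_PULL_OUTPUT 0x01u

static volatile uint32_t g_ms_ticks;  /* incremented by a 1ms timer interrupt (not shown) */

int main(void)
{
    uint32_t last_toggle = g_ms_ticks;

    PORTA_PIN0_CFG = PUSH_PULL_OUTPUT;
    while (1)
    {
        if ((uint32_t)(g_ms_ticks - last_toggle) >= 500u)   /* 500ms elapsed? */
        {
            PORTA_DATA ^= 0x01;    /* toggle the LED pin */
            last_toggle = g_ms_ticks;
        }
    }
    return 0;
}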

One critical thing to remember when bringing up your board: the project is still very young. Even though you have proved some parts of your development flow, tool chain and development hardware, the rest of the pieces of the puzzle are yet to be solved. However, it is a big step just to be able to blink a LED.

I am writing a Guide to Successful Unit Tests.
You can get it here at Leanpub, or here at Gumroad, and read about these topics and more.

Can a unit test replace a debugger?

I started working in embedded systems in 1992. Even at that time, I wondered whether it was possible to get code working without the need for a debugger.

The debuggers I was using when I started were ICEs [0]. By this I mean that the microcontroller was replaced with a special bonded-out chip whose clocking was controlled. The ICE allowed me full access to the microcontroller's operation and peripherals, almost down to controlling each clock cycle. I was able to see everything the microcontroller could see. Sometimes the cache on the ICE was large enough that I could also capture a log of the instruction sequence. This was very helpful when I had to double-check the execution path leading up to a break point.

These days I work in a large engineering team with 32-bit devices. The debuggers are usually interfaced via the JTAG connector. Although I can use one to debug my code, the control is not as fine as that of an ICE. The direction seems to be heading towards a higher level of abstraction: only use a debugger when it is necessary. With a typical layered architecture, this is workable.

Extrapolating further, is it possible to do a full project without any access to a debugger? In many projects, this is already occurring. Looking at the large amount of work from the ESP8266 [1] scene, the majority of the code is developed without a debugger. Yet this is in the hobbyist or prototype domain. What about the professional domain? Removing the debugger from the development tool chain would be difficult, especially when debugging code that is close to the metal. However, the higher up the software stack my code resides, the smaller the reliance on the debugger becomes.

The main advantage of a unit test framework is that it decouples the software from the hardware and from the hardware's maturity level. It may be possible for the software to be completed and released before the hardware is manufactured. Provided the behaviour of the hardware is well known, this is a good strategy for reducing development time.

For the debugger, its major strength is allowing the code to be debugged close to the microcontroller. If I need to double-check any data from the hardware, a debugger is essential. This is one area a unit test framework can never cover: the hardware may be mocked, but the mocked behaviour will need to be verified against the real hardware.

[0] – In circuit emulator. http://www.ganssle.com/articles/BegincornerICE.htm
[1] – https://en.wikipedia.org/wiki/ESP8266


I am writing a Guide to Successful Unit Tests.
You can get it here at Leanpub, or here at Gumroad, and read about these topics and more.

Why do you need to measure test coverage?

I was at a meetup recently and asked what tools are available for measuring coverage data during a test run for an up-and-coming programming language. Some comments were made that coverage measurements are not that important in the scope of unit testing. I was rather taken aback by this. Coverage measurements must be part of the data used to determine how successful the testing was.
To get a useful measurement of coverage, let's go through what is meant by coverage. From a testing perspective, if a requirement can be validated by tests, the requirement is covered by tests. The validation comes from testing that the implementation's behaviour is the same as the requirement.

Stuff that needs coverage

Executed code – Some of the code you write is absolutely critical to the functionality of your product. Software modules such as your scheduler or startup code must be covered; these modules are core. There is also code that will be executed but whose functionality is less critical. An example might be a module that performs a maths calculation: its accuracy is important, but the time it takes is not. Both types of code must be unit tested, as both are executed.

Safety Critical Code – This code performs functions that are critical to the safety of the user. It might include software functions dealing with the airbag system in a car. This type of code must be covered to ensure its correct behaviour.

Code that will be audited – Some of the code written will be audited to ensure that it conforms to a set of design principles. The validation of this code is based on the auditing.

Safe path execution – Not only should the code's execution be covered; the path the execution takes should also be covered. For code dealing with functional safety, the safe execution path taken when a failure occurs must be mapped out. In this context, failure means a failure that can be detected and managed. For example, if the code detects that some critical memory has become corrupted, the execution path that recovers from or contains the corruption must be checked (a small sketch follows).
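
Here is a small sketch of why the execution path needs its own coverage. The configuration structure and helper functions are hypothetical, stubbed so the sketch compiles on its own.

#include <stdint.h>

struct config { uint32_t crc; int value; };

/* hypothetical helpers, stubbed so the sketch is self-contained */
static int  crc_ok(const struct config *cfg)       { return cfg->crc == 0xA5A5A5A5u; }
static void apply_config(const struct config *cfg) { (void)cfg; }
static void apply_defaults(struct config *cfg)     { cfg->value = 0; }

int load_config(struct config *cfg)
{
    if (crc_ok(cfg))
    {
        apply_config(cfg);
        return 0;
    }
    /* the safe path: unless a test deliberately corrupts the CRC, this
       branch is never executed, even with every happy-path test passing */
    apply_defaults(cfg);
    return -1;
}

A test suite can call every function here and still never run the recovery branch; measuring path coverage is what exposes that gap.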

I have only given four examples where coverage is a very good tool to use. It is the metric that tells you whether all of your code has actually been exercised by your tests.


I am writing a Guide to Successful Unit Tests.
You can get it here at Leanpub, or here at Gumroad, and read about these topics and more.

Making sure that your unit testing is painless to run.

To reduce the obstacles to using unit tests, it is a good idea to integrate them into your development work flow. The lower the barrier to setting them up, the more likely unit testing is to be used.

The classical development flow is to have your design specifications stabilised and ready for implementation. In reality, at this early stage, stabilisation means something different from when the software module is ready for release. What is critical here is having the public functions stabilised. There are two reasons why.

The first is that the users of your functions can rely on the published function signatures and have an understanding of their behaviour. This is obvious and is one of the tenets of modern software development.

The second is that if your unit testers are not the actual developers, they can start working on the test code as soon as the public functions are known. There will be some adjustments to function calls as your module develops.

However, the key to using unit tests in your development flow is being able to run the tests with minimal effort.

When you are doing a build of your module, include a step in your build process to execute the tests. In your build process, make the unit test runnable depend on both your source code and your unit test code. As soon as either changes, your unit tests will be rebuilt and executed.

Any mismatch between the unit test code and your module will get picked up. By having your unit test code written by another person, it also creates a method of validating the design documentation. It checks that both code and documentation are aligned.
By making test execution one of the steps in your build process, you must decide what you want to happen when a unit test fails. There are two obvious approaches.

The first is to treat unit test failures as warnings and allow the build to complete. I am not a big fan of this method, as it gives unit tests a lesser importance.
The more disciplined way is to treat unit test failures as build errors and stop the build process from moving on to the next step until the errors are fixed. This is a much stricter and more disciplined method of software development.

Whatever build system you are using, spend some time to integrate the building and executing of unit tests into your build sequences. It will be worth the effort in the long run.

I have written a Guide to Successful Unit Tests.
You can get it here at Leanpub, or here at Gumroad, and read about these topics and more.