This tutorial describes a basic workflow of using Klever. We assume that you deploy Klever locally on Debian 9 in production mode with default settings from the latest master branch. In addition, we assume that your username is debian and your home directory is /home/debian.
Preparing Build Bases
After a successful deployment of Klever you need to prepare a build base on the same machine where you deployed Klever. This tutorial treats only build bases for Linux kernel loadable modules since the publicly available version of Klever supports verification of other software at an experimental stage. You should not expect that Klever supports all versions and configurations of the Linux kernel well. There is still a long list of things to do in this direction.
Below we consider, as an example, preparation of a build base for verification of Linux 3.14.79 modules (architecture x86_64, configuration allmodconfig, GCC 4.8.5). You can try to execute similar steps for other versions and configurations of the Linux kernel at your own risk. To build newer versions of the Linux kernel you may need newer versions of GCC.
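As a quick sanity check before building, you can compare the installed GCC against the GCC 4.8.5 mentioned above. The sketch below is our own helper, not part of Klever or Clade; the function name major_version is a hypothetical name introduced for illustration:

```shell
# major_version: print the major component of a dotted version string.
# This is a helper of our own, not a Klever or Clade command.
major_version() {
    printf '%s\n' "$1" | cut -d. -f1
}

# Compare the installed GCC against the GCC 4.8.5 used in this tutorial.
installed=$(gcc -dumpversion 2>/dev/null || echo unknown)
echo "installed GCC: $installed (major: $(major_version "$installed"))"
```

If the reported major version is much newer than 4, building Linux 3.14.79 itself may fail, which is a limitation of the old kernel sources rather than of Klever.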
You can download the archive of the target build base prepared in advance from here. Let's assume that you decompress this archive into directory /home/debian/build-base-linux-3.14.79-x86_64-allmodconfig so that file meta.json resides directly at the top level of that directory.
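Whichever way you obtain the build base, you can quickly check that it is laid out as expected. The following function is a minimal sketch of ours (check_build_base is not a Klever or Clade command); it only verifies that meta.json is present at the top level of a given directory:

```shell
# check_build_base: report whether a directory looks like a Clade build base,
# i.e. contains meta.json directly at its top level.
# This is our own helper, not part of Klever or Clade.
check_build_base() {
    if [ -f "$1/meta.json" ]; then
        echo "ok: $1 contains meta.json"
        return 0
    else
        echo "error: no meta.json at the top level of $1" >&2
        return 1
    fi
}

# For example, with the path used in this tutorial:
# check_build_base /home/debian/build-base-linux-3.14.79-x86_64-allmodconfig
```

This does not validate the contents of the build base, but it catches the common mistake of decompressing the archive one directory level too deep.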
To prepare the target build base from scratch you can follow these steps:
$ wget https://cdn.kernel.org/pub/linux/kernel/v3.x/linux-3.14.79.tar.xz
$ tar -xvf linux-3.14.79.tar.xz
$ cd linux-3.14.79/
$ make allmodconfig
$ clade -w ~/build-base-linux-3.14.79-x86_64-allmodconfig -p klever_linux_kernel make -j8 modules
Then you will need to wait quite a long time, depending on the performance of your machine.
Signing in

Before performing all other actions described further in this tutorial, you need to sign in to the Klever web interface:
- Open page http://localhost:8998 in your web browser.
- Enter manager as both the username and the password and sign in (Fig. 1).
Then you will be automatically redirected to a job tree page presented in the following sections.
As an example we consider checking usage of clocks in USB drivers. To start up verification you need to do as follows:
- Start the creation of a new job (Fig. 2).
- Specify an appropriate title and create the new job (Fig. 3).
- To configure the first job version you need to specify:
- The path to the prepared build base that is /home/debian/build-base-linux-3.14.79-x86_64-allmodconfig.
- Targets, e.g. USB drivers, i.e. all modules from directory drivers/usb in our example.
- Requirement specifications to be checked, e.g. drivers:clk1 and drivers:clk2 in our example (you can see a complete list of supported requirement specifications at the end of this section).
- Press Ctrl-S while in the editor window to save changes.
- Start a decision of the job version (Fig. 4).
After that Klever automatically redirects you to a job version/decision page that is described in detail in the following sections.
Later you can create new jobs by opening the job tree page, e.g. by clicking on the Klever logo (Fig. 5), and executing the steps above. You can create new jobs even while some job version is being decided, but job versions are decided one by one by default.
Below are the requirement specifications that you can choose for verification of Linux loadable kernel modules (we do not recommend checking requirement specifications whose identifiers are italicised since they produce either many false alarms or just a few violations of these requirements at all):
- alloc:usb lock
- concurrency safety
- kernel:rcu:update:lock bh
- kernel:rcu:update:lock sched
- memory safety
In case of verification of a custom version of the Linux kernel rather than vanilla 3.14.79, you may need to specify one extra parameter, specifications set, when configuring the job version (Fig. 4), with a value from the following list:
These specification sets correspond to vanilla versions of the Linux kernel. You should select the specifications set that best matches your custom version of the Linux kernel, perhaps through trial and error.
At the beginning of the decision of a job version Klever indexes each new build base. For large build bases this can take rather a long time before Klever starts to generate and to decide the first tasks. In about 15 minutes you can refresh the page and see some tasks and their decisions there. Please note that the automatic refresh of the job version/decision page stops after 5 minutes, so you either need to refresh it through web browser means or request Klever to switch it back on (Fig. 6).
Before the job version is eventually decided, Klever estimates and provides a decision progress (Fig. 7 and Fig. 8). You should keep in mind that Klever collects statistics for 10% of tasks before it starts predicting the approximate remaining time of their decision. After that, it recalculates the estimate on the basis of newly accumulated statistics. In our example it takes 1 day and 2 hours to decide the job version completely (Fig. 9).
At the job tree page you can see all versions of particular jobs (Fig. 10) and their decision statuses (Fig. 11). Besides, you can open the page with details of the decision of the latest job version (Fig. 12) or the page describing the decision of a particular job version (Fig. 13).
Analyzing Verification Results
Klever can fail to generate and to decide tasks. In this case it provides users with unknown verdicts; otherwise there are safe or unsafe verdicts (Fig. 14). You already saw the summaries of these verdicts at the job tree page (Fig. 10 and Fig. 11). In this tutorial we consider no verdicts other than unsafes, which are either violations of checked requirements or false alarms (Fig. 15). Klever reports unsafes, if any, during the decision of the job version, and you can assess them both during the decision and after its completion.
During assessment of unsafes experts can create marks that match other unsafes with similar error traces (we consider marks and error traces in detail within the next sections). For instance, there is a preset mark for a sample job that matches one of the reported unsafes (Fig. 16). Automatic assessment can considerably reduce the effort of analyzing verification results, e.g. when verifying several versions or configurations of the same software. But experts should still analyze such automatically assessed unsafes since the same mark can match unsafes with error traces that look very similar but correspond to different faults. Unsafes without marks need assessment as well (Fig. 17). When checking several requirement specifications in the same job, one is able to analyze unsafes just for a particular requirements specification (Fig. 18).
After clicking on the links in Fig. 15-Fig. 18 you will be redirected to pages with lists of corresponding unsafes (e.g. Fig. 19), except that if there is only one element in the list its error trace will be shown immediately. For further analysis we recommend clicking on an unsafe index on the left to open a new page in a separate tab (Fig. 20). To return to the job version/decision page you can click on the title of the job decision at the top left (Fig. 21). This can be done at any page with such a link.
Analyzing Error Traces
After clicking on links within the list of unsafes like in Fig. 20, you will see corresponding error traces. For instance, Fig. 22 demonstrates an error trace example for module drivers/usb/gadget/mv_u3d_core.ko and requirements specification drivers:clk1.
An error trace is a sequence of declarations and statements in the source code of a module under verification and in an environment model generated by Klever. Besides, within that sequence there are assumptions specifying conditions that a software model checker considers to be true. Declarations, statements and assumptions represent a path starting from an entry point and ending at a violation of one of the checked requirements. The entry point analogue for userspace programs is the function main, while for Linux loadable kernel modules entry points are generated by Klever as a part of environment models. Requirement violations do not always correspond to places where detected faults should be fixed. For instance, a developer can omit a check for a return value of a function that can fail. As a result, various issues, such as leaks or null pointer dereferences, can be revealed somewhere later.
Numbers in the left column correspond to line numbers in source files and models. Source files and models are displayed to the right of error traces. Fig. 22 does not contain anything in the right part of the window since the corresponding file is the environment model containing the generated main function, and by default models are not shown to users in the web interface. If you click on a line number corresponding to an original source file, you will see this source file as in Fig. 23.
You can click on eyes and rectangles to show hidden parts of the error trace (Fig. 24-Fig. 25). Then you can hide them back if they are of no interest to you. The difference between eyes and rectangles is that functions with eyes have either notes (Fig. 26) or warnings (Fig. 27) at some point of their execution, perhaps within called functions. Notes describe important actions in models. Warnings represent places where Klever detects violations of the checked requirements.
You can see that before calling module initialization and exit functions as well as module callbacks there are additional entries in the error trace. These are parts of the environment model necessary to initialize the model, to invoke module interfaces the way the environment does, and to check the final state. This tutorial does not consider models in detail, but you should keep in mind that Klever can detect faults not only directly in the source code under verification but also when checking something after execution of corresponding functions. For instance, this is the case for the considered error trace (Fig. 27).
To create a new mark for the considered unsafe you need to do the following:

- Specify a verdict (Bug in our example).
- Specify a status (Fixed).
- Provide a description.
- Save the mark.
After that you will be automatically redirected to the page demonstrating changes in total verdicts (Fig. 30). In our example there is only one change, which corresponds to the analyzed unsafe and the new mark. But in the general case there may be many changes since the same mark can match several unsafes, and you may need to investigate these changes.
After creating the mark you can see the first manually assessed unsafe (Fig. 31). Besides, as already noted, you should investigate automatically assessed unsafes by analyzing corresponding error traces and marks and by (un)confirming their associations (Fig. 32-Fig. 33).
False alarms can happen for different reasons. There are corresponding tags for the most common of them. You can find the complete tree of tags in Fig. 34.
Each tag has a description that is shown when hovering over the tag name (Fig. 35).
You may not be fully satisfied with the quality of the obtained verification results. Perhaps you could not obtain them at all. This is expected since Klever is open source software developed in academia, and we support verification of Linux kernel loadable modules primarily for evaluation purposes. Besides, this tutorial omits many tricky activities like development of specifications and support for verification of additional software. We are ready to discuss various issues and even to fix some crucial bugs, but we do not have the manpower to make considerable improvements for you for free.
Note: If this is not the case, you should adjust the paths to build bases below accordingly.

Note: You can open the Klever web interface from other machines as well, but you need to set up appropriate access for that.

Note: For the considered example each task is a pair of a Linux loadable kernel module and a requirements specification. There are 3355 modules under verification and 2 requirement specifications to be checked, so there are 6710 tasks in total.