../_images/cartesiam_logo.png

NanoEdge AI Studio

../_images/banner.png
NanoEdge AI Studio, Library and Emulator: use the Studio to find the best Library.
Before embedding, test your Library’s performances locally using its clone, the Emulator.

I. What is NanoEdge AI Library?

NanoEdge AI Library is an artificial intelligence static library developed by Cartesiam, for embedded C software running on ARM Cortex microcontrollers.

When embedded on microcontrollers, it gives them the ability to easily “learn” and “understand” sensor patterns, by themselves, without the need for the user to have additional skills in Mathematics, Machine Learning, or Data science.

The NanoEdge AI static library is the code that contains an AI model (a bundle of signal treatment, a machine learning model, optimally tuned hyperparameters, etc.) designed to gather knowledge incrementally during a learning phase, in order to detect potential anomalous machine behaviors, and possibly predict them.


II. Purpose of NanoEdge AI Studio

1. What the Studio does

NanoEdge AI Library contains a range of machine learning models, and each of those models can be optimized by tuning a range of (hyper)parameters. This results in a very large number of potential combinations (static libraries), each one being tailored for a specific application.

NanoEdge AI Studio is like a search engine, built for embedded developers; its purpose is to find the best NanoEdge AI static library possible for your final hardware application (i.e. the piece of code that contains the most relevant machine learning model to your application, tuned with the optimal parameters), in a way that doesn’t require you to have advanced skills in Mathematics, Statistics, Data Science, or Machine Learning.

Each NanoEdge AI static library is the result of the benchmark of virtually all possible AI libraries (combinations of signal treatment, ML model, tuned hyperparameters), tested against the datasets given by the user. It is the result of the comparison of all possible methods of learning, given the user’s data.

Using NanoEdge AI Studio, you will be able to quickly and easily generate an AI library, in the form of a static .a file, which provides smart functions (learn, detect, …) as building blocks to implement smart features into your C code, to be embedded in your microcontroller.

2. What the Studio doesn’t do

In a nutshell, NanoEdge AI Studio takes user data as input (at least 2 .csv files), and produces a static library (.a) file as output. This procedure is straightforward and relatively quick.

However, the Studio doesn’t provide any input data. The user needs to have qualified data in order to obtain satisfactory results from the Studio. These data can be raw sensor signals, or pre-treated signals, and need to be formatted properly (see below). For example, for anomaly detection on a machine, the user needs to collect signal examples of “normal” behavior on this machine, as well as a few examples (non-exhaustive) of “anomalies”. This data collection process is crucial, and can be tedious, as some expertise will be needed to design the correct signal acquisition and sampling methodology, which can vary dramatically from one project to another.

Additionally, NanoEdge AI Studio doesn’t provide any ready-to-use C code to implement in your final project. This code, which will include some of the NanoEdge AI Library’s smart functions (such as initialize, learn and detect), needs to be written and compiled by the user. The user is free to call these functions as needed, and implement all the smart features imaginable.

In summary, the static (.a) library file, outputted by the Studio from user-generated input data, will have to be linked to some C code written by the user, and compiled/flashed by the user on the target microcontroller.


III. Getting started

1. Running NanoEdge AI Studio for the first time

When running NanoEdge AI Studio for the first time, you will be prompted for:

  • Your proxy settings: if you’re using a proxy, use the settings below, otherwise, click NO.

    Licensing API: 104.31.76.187, 104.31.77.187, or via URL: https://api.cryptlex.com:443
    Cartesiam API for library compilation: 40.113.111.93, or via URL: https://apidev.cartesiam.net
  • The port you want to use.

    It can be changed to any port available on your machine (port 5000 by default).

  • Your license key.

    If you don’t know your license key, log in to the Cryptlex licensing platform to retrieve it.
    If you have lost your login credentials, reset your password using the email address used to download NanoEdge AI Studio.

    Note

    If you don’t have an Internet connection, offline activation is available:

    1. Choose Offline activation and enter your license key.
    2. Copy the long string of characters that appears.
    3. Log in to the Cryptlex licensing platform.
    4. Reset your password using the email address provided when downloading NanoEdge AI Studio.
    5. Log into your Cryptlex dashboard using your new password.
    6. Click on your license key, then Activations, then Offline activation.
    7. Click ACTIVATION, then paste the string of characters copied in step 2, and click Download response.
    8. In NanoEdge AI Studio, click Import file and open the downloaded .dat file.

2. Preparing signal files

During the library selection process, NanoEdge AI Studio uses user data (input files containing signals) to test and benchmark many machine learning models and parameters. The way those input files are structured and formatted, and the way the signals were recorded, are therefore very important.

i. Formatting input files properly

Each input file contains several temporal signals, each made of samples.

Signals

Each line in the input file corresponds to an observation of a signal during a given time. It is a signal example.

Each of these lines comes from a sampling process that involves reading values from a signal at defined intervals, generally constant, producing a series of discrete values called samples.

Important

In NanoEdge AI Studio, each line is taken into account independently and iteratively, so each must represent a meaningful snapshot in time of the signal to be measured. It is therefore crucial to set a coherent sampling frequency and a proper buffer size (see next section).

The number of lines determines how many signal examples our search algorithms will treat in total. As a rule of thumb, never use fewer than 20-50 lines per file, nor millions of lines; a realistic range is a hundred to a few thousand lines.

Samples

During sampling, the signal’s values are read, generally at regular intervals. We recommend keeping a constant sampling frequency during data acquisition, regardless of the final application.

The number of samples per signal, or buffer size, is set by the user depending on this sampling frequency (see next section), and depends on each particular use case. You need enough samples to cover the whole physical phenomenon studied, to get a proper and meaningful signal snapshot.

Note

  • 1-axis sensor (e.g. pressure) with buffer size 256; file has 256 values per line.
  • 3-axis sensor (e.g. vibration) with buffer size 256; file has 256 × 3 = 768 values per line.

Important

Samples are the numerical values that constitute a signal (a line). The number of samples per line, or buffer size, must stay constant across data imported in the Studio that you wish to consider together (e.g. a normal / abnormal signals couple, see Studio: Importing signals).

Whenever possible, please use buffer sizes that are powers of two (e.g. 128, 1024…).

The allowed separators are: single space, tab, comma (,), and semicolon (;).
Please format decimal values using a period (.) and not commas (,).
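As a sanity check before importing, the formatting rules above (constant number of values per line, numeric values only, period decimals) can be verified with a short script. This is a minimal sketch under assumed settings (3-axis sensor, buffer size 256, single-space separator); the function name is illustrative, not part of NanoEdge AI.

```python
# Sketch: validate a NanoEdge AI Studio input file before importing it.
# AXES, BUFFER_SIZE and SEPARATOR are assumptions; adjust them to your setup.

AXES = 3
BUFFER_SIZE = 256
SEPARATOR = " "

def validate_input_file(lines):
    """Check every line has AXES * BUFFER_SIZE numeric values; return line count."""
    expected = AXES * BUFFER_SIZE
    for i, line in enumerate(lines, start=1):
        values = line.strip().split(SEPARATOR)
        if len(values) != expected:
            raise ValueError(f"line {i}: {len(values)} values, expected {expected}")
        for v in values:
            if "," in v:
                raise ValueError(f"line {i}: use a period, not a comma, for decimals")
            float(v)  # raises ValueError if not a valid number
    return len(lines)
```

Running this over your .csv before import catches the most common blocking errors (wrong value count, comma decimals) early.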

Example

Here is an example of signal file corresponding to a 3-axis sensor, e.g. a collection of m signal examples (m readings, or lines) on a 3-axis accelerometer with a buffer size of 256 on each axis, where each numerical value is separated by a single space:

../_images/input_example1.png

Warning

Depending on project constraints, buffer size, signal lengths, and sampling frequencies will vary.
For example, for a buffer size of 256, it could mean that:
  • we needed to capture 0.25-second signals, with a sampling frequency of 1 kHz, so we chose a buffer size of 256 (256/1000 = 0.256).
  • we needed to sample at a higher frequency (4 kHz), so with a buffer size of 256, our signals will be much shorter, 64 ms (256/4000 = 0.064).

ii. Using more than one sensor (multi-sensor)

In NanoEdge AI Studio (since v1.2.0), multi-sensor capability has been implemented. It is now possible to find libraries that use multiple sensors as input (e.g. 3-axis magnetometer + temperature sensor + pressure sensor, for a total of 5 variables).

Multi-sensor can be selected on the project creation screen, where you would normally select your sensors:

../_images/multi-sensor.png

Make sure that you select the correct number of variables.

Warning

  • Multi-sensor works very differently, compared to other traditional “single” sensors. It is intended for very precise use cases, when the user needs to read and monitor multi-sensor machine states.
  • It is not designed for time series, to monitor buffers of temporal data. Instead, it can be used with higher-level features (e.g. mean, min, max, stdev…), extracted from these buffers, to aggregate them into states that can be read from time to time, but not continuously.
  • Therefore, multi-sensor is not intended to be used, for example, with accelerometer buffers, rapidly varying current/voltage buffers, or any kind of temporal data.

Important

  • When using multi sensor, each line (or signal example) only contains as many values as you have variables.
  • In other words, each line represents one single sample, or one single state (whereas for “single” sensors, each line represented a succession in time of many samples).
  • In summary:
    • In mono-sensor, you have NUMBER_OF_AXES * BUFFER_LENGTH values per line.
    • In mono-sensor, a line represents a signal snapshot consisting of several samples.
    • In multi-sensor, you only have NUMBER_OF_VARIABLES values per line (effectively, a buffer length of 1).
    • In multi-sensor, a line represents a state, not a full temporal signal snapshot anymore.

Example:

For example, for an application where a state can be represented via a combination of magnetism, temperature, and pressure: we can aggregate data from a 3-axis magnetometer, a (1-axis) thermometer, and a (1-axis) pressure sensor. Temperature and pressure, if they vary slowly, can be read directly, but magnetometer data needs to be summarized using (for example) average values across a 50 millisecond window along all 3 axes.

This would result in 3 extracted magnetic features, followed by temperature, followed by pressure, to represent a 5-variable state.

../_images/input_example_multi1.png

We could also imagine building a more complex state from our 50 millisecond magnetometer buffer, including not only average magnetometer values, but also minimums and maximums, for all 3 axes. This would result in 3 × 3 = 9 extracted magnetometer values (3 each for average, minimum, maximum), followed by temperature and pressure, to represent an 11-variable state.

../_images/input_example_multi2.png
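The state construction described above can be sketched as follows, for the 5-variable case (per-axis magnetometer means, then temperature and pressure). Function and variable names are illustrative assumptions, not part of NanoEdge AI.

```python
# Sketch: build one multi-sensor "state" line (5 variables) for the input file:
# per-axis means of a 3-axis magnetometer buffer (e.g. a ~50 ms window),
# followed by temperature and pressure readings.

def build_state(mag_buffer, temperature, pressure):
    """mag_buffer: list of (x, y, z) samples; returns one input-file line."""
    n = len(mag_buffer)
    means = [sum(sample[axis] for sample in mag_buffer) / n for axis in range(3)]
    state = means + [temperature, pressure]
    # Period decimals, single-space separator, as required by the Studio.
    return " ".join(f"{v:.3f}" for v in state)
```

The 11-variable variant would simply append per-axis minimums and maximums to the means before adding temperature and pressure.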

iii. Choosing a relevant sampling frequency and buffer size

To prepare input data (except when using multi-sensor), it is crucial to choose the most adequate sampling frequency and buffer size for your sensors.

The sampling frequency corresponds to the number of samples measured per second. For some sensors, the sampling frequency can be directly set by the user, but in other cases, a timer needs to be set up for constant time intervals between each sample.

The speed at which the samples are taken must allow the signal to be accurately described, or “reconstructed”; the sampling frequency must be high enough to account for the rapid variations of the signal. The question of choosing the sampling frequency therefore naturally arises:

  • If the sampling frequency is too low, the readings will be too far apart; if the signal contains relevant features between two samples, they will be lost.
  • If the sampling frequency is too high, it may negatively impact the costs, in terms of processing power, transmission capacity, storage space, etc.

Important

To choose the sampling frequency, prior knowledge of the signal is useful in order to know its maximum frequency component. Indeed, to accurately reconstruct an output signal from an input signal, the sampling frequency should be at least twice as high as the maximum frequency that you wish to detect within the input signal.

Without any prior knowledge of the signal, we recommend testing several sampling frequencies and refining them according to the results obtained via NanoEdge AI Studio / Library (e.g. 200 Hz, 500 Hz, 1000 Hz, etc.).

The issues related to the choice of sampling frequency and the number of samples are illustrated below:

  • Case 1: the sampling frequency and the number of samples make it possible to reproduce the variations of the signal.

    ../_images/sampling-freq-1.png
  • Case 2: the sampling frequency is not sufficient to reproduce the variations of the signal.

    ../_images/sampling-freq-2.png
  • Case 3: the sampling frequency is sufficient but the number of samples is not sufficient to reproduce the entire signal (i.e. only part of the input signal is reproduced).

    ../_images/sampling-freq-3.png

The buffer size corresponds to the total number of samples recorded per signal, per axis. Together with the sampling frequency, it constrains the effective temporal length of the signal.

Important

In summary, there are 3 important parameters to consider:

  • n: buffer size
  • f: sampling frequency
  • L: signal length

They are linked together via: n = f * L. In other words, choosing two of them (according to your use case) constrains the third.

Here are general recommendations. Make sure that:

  • the sampling frequency is high enough to catch all desired signal features. To sample a 1000 Hz phenomenon, you must at least double the frequency (i.e. sample at 2000 Hz at minimum).
  • your signal is long (or short) enough to be coherent with the phenomenon to be sampled. For example, if you want your signals to be 0.25 seconds long (L), you must have n / f = 0.25. For example, choose a buffer size of 256 with a frequency of 1024 Hz, or a buffer of 1024 with a frequency of 4096 Hz, and so on.

Note

For best performance, always use a buffer size n that is a power of two (e.g. 128, 512…).
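The n = f * L relationship can be sketched as a small helper that, given any two of the three parameters, returns the third (names are illustrative):

```python
# Sketch: given two of (buffer size n, sampling frequency f in Hz,
# signal length L in seconds), compute the third via n = f * L.

def third_parameter(n=None, f=None, L=None):
    """Leave exactly one of n, f, L unset (None); returns its constrained value."""
    if n is None:
        return f * L      # buffer size
    if f is None:
        return n / L      # sampling frequency (Hz)
    if L is None:
        return n / f      # signal length (seconds)
    raise ValueError("leave exactly one parameter unset")
```

For instance, a 0.25 s signal at 1024 Hz constrains the buffer size to 256, and a 256-sample buffer at 4000 Hz constrains the signal length to 64 ms, matching the examples above.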


IV. Using NanoEdge AI Studio

In order to generate a static library, NanoEdge AI Studio walks the user through several steps:

  1. Creating a new project and setting up its parameters.
  2. Importing “regular signal” files into the studio.
  3. Importing “abnormal signal” files into the studio.
  4. Running the library selection process.
  5. Testing the best library found by the Studio.
  6. Compiling and downloading the library [Full version / Featured boards only].
../_images/all_steps.png

1. Creating a new project

In the main window, you can:

  • Create a new project
  • Load an existing project
../_images/home_settings_create.png

Navigation:

  • home: back to project creation / project loading.
  • settings: proxy, license, port, language.
  • documentation: open the documentation.
  • log: check the local .log files.
  • bug: report a bug via the Freshdesk platform.

Project creation:

  • Enter name and description;

  • Choose the microcontroller type;

  • Choose the maximum amount of RAM to be dedicated to the Machine Learning algorithms;

  • Choose the sensor type used to collect data (with the correct number of axes);

  • Click CREATE.

    ../_images/2_project_details.png

Note

  • Most ARM Cortex-M cores are supported: M0, M0+, M1, M3, M4, M23, M33 and M7.

  • RAM values are in kB (typically 16 or 32 kB).

  • When selecting one of the featured boards (e.g. NUCLEO-F401RE, NUCLEO-L432KC, or STM32L562QE), you will be able to download a prototyping library, even with the Free/Trial version of NanoEdge AI Studio.

    ../_images/2_featured_boards.png

2. Importing signal files

In the next two steps (Step 2: Regular signals and Step 3: Abnormal signals), you will import your signal data.
These data can either come from a file, or be logged live using a Serial port.

There are two types of signals required:

  • The Regular signals, corresponding to nominal machine behavior, i.e. data acquired by sensors during normal use, when everything is functioning as expected.

    ../_images/22_screen2_top.png

    Please include data corresponding to all the different regimes, or behaviors, that you wish to consider as “nominal”. For example, when monitoring a fan, you may need to log vibration data corresponding to different speeds, possibly including the transients.

  • The Abnormal signals, corresponding to abnormal machine behavior, i.e. data acquired by sensors during a phase of anomaly.

    ../_images/33_screen3_top.png

    The anomalies don’t have to be exhaustive. In practice, it would be impossible to predict (and include) all the different kinds of anomalies that could happen on your machine. Just include examples of some anomalies that you’ve already encountered, or that you suspect could happen. If needed, don’t hesitate to create “anomalies” manually.

    However, if the Library is expected to be sensitive enough to detect very “subtle anomalies”, we recommend that the data provided as abnormal signals includes at least some examples of subtle anomalies as well, and not only very gross, obvious ones.

Important

These signal files are only necessary to give the benchmark algorithms some context, in order to select the best library possible.

At this stage, no learning is taking place yet. In later stages, after the optimal library is selected, compiled, and downloaded, it will be completely untrained, with no established knowledge.

The learning process that will then be performed, either via NanoEdge AI Emulator, or in your embedded hardware application, will be completely unsupervised.

i. Importing from file

Please make sure that your input files are formatted properly (see Studio: Formatting input files).

  • Click Select file, and choose a valid input file.

  • Select the separator you are using.

  • Validate import.

    ../_images/import_signals_file.png

If your input file is valid, you will be able to import it. Otherwise, please double check your data (numerical values, uniform separators, constant number of samples per line…).

ii. Importing “live” from Serial port (USB)

It is possible to import signals directly within the Studio, by logging them through your computer’s Serial port (USB).

Note

If you need to open a serial / COM port on Linux, check the FAQ (Section II. 9) for instructions.

You need a USB data logger to do this. For instructions on how to make a simple data logger, check our tutorials, Smart vibration sensor and Smart current sensor, under section III. Making a data logger.

  • Select your Serial / COM port. Refresh if needed.

  • Choose your preferred baudrate.

  • If needed, select a maximum number of lines to be recorded.

  • Click the red “Record” round button to start the data logging.

  • Click the grey “Stop” square button to interrupt the logging.

  • Choose your delimiter.

  • Validate import.

    ../_images/import_signals_serial.png

If your data is valid, you will be able to import it. Otherwise, please double check your data logger parameters.

Warning

  • Your data logger must output lines containing a constant number of samples per line, all separated by the same separator.
  • Your data logger must also output signals one line at a time to be coherent with the way input files are formatted, see the Formatting section.

iii. Checks, errors and warnings

Supported file formats are .txt / .csv. Recommended separators are single spaces, commas or semicolons. Please make sure that your input file is correctly formatted.

In this example, we have an input file containing 200 examples of nominal data (200 lines), for a 3-axis accelerometer that uses a buffer size of 256 (which gives 256x3 = 768 numerical values per line).

../_images/checks_ok.png

The Check for RAM and the next 5 checks are blocking, meaning that you will need to fix any error in your input file before proceeding further.

../_images/checks_error.png

The last 5 checks are non-blocking. They are just warnings that suggest possible modifications on your input files. Click any warning for more information and advice.

../_images/checks_warning.png

Note

If you imported data live using a data logger through your Serial port, you can download the resulting .csv file by clicking the download icon, at the top-right of the checks.

iv. Data plots

On the right hand side of the screen, you will see a preview of data contained in your input files:

  • For regular signals, in Step 2:

    ../_images/22_screen2_graph.png
  • For abnormal signals, in Step 3:

    ../_images/33_screen3_graph.png

These graphs show a summary of the data contained in each line of your input files. There are as many graphs as sensor axes.

The graph’s x-axis corresponds to the columns of your input file. The y-values indicate the mean value of each column (across all lines, or signals), their min-max range, and their standard deviation.

Note

Here, our accelerometer sampled 256 values per line (per axis), so we see 256 points on the graphs’ x-axis. Those graphs do not represent a temporal evolution of the behavior of your machine as a whole, but rather a snapshot of the actual physical signals, averaged across all lines.
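The summary shown by these plots (per-column mean, min-max range, and standard deviation across all lines) can be sketched as follows. This is only an illustration of what the graphs display, not the Studio’s actual code.

```python
# Sketch: per-column statistics across all lines (signals) of an input file,
# mirroring what the Studio's data-preview plots summarize.
import statistics

def column_stats(signals):
    """signals: list of equally-sized lists of samples (one list per line)."""
    columns = list(zip(*signals))  # transpose: one tuple per input-file column
    return [
        {
            "mean": statistics.fmean(col),
            "min": min(col),
            "max": max(col),
            "stdev": statistics.pstdev(col),
        }
        for col in columns
    ]
```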

Several input files can be loaded (they will be listed on the left side of the screen, see below), either for “regular” or “abnormal” signals, but only one of each at a time will be used for library selection.

../_images/22_screen2_sigfiles.png

3. Running the library selection process

Here (Step 4: Optimize and Benchmark), you will start and monitor the library benchmark. NanoEdge AI Studio will search for the best possible library given the signal files provided in Step 2 and Step 3 (see previous section).

../_images/44_screen4_top_notstarted.png

i. Starting the benchmark

Click START to open the signal selection window:

../_images/55_pre_benchmark.png
Select a couple of signal files (regular + abnormal signals) that you wish to use for benchmark.
Those signals can be compared visually across all sensor axes by clicking Compare these signals.

Important

You can select several couples that will be used to test the performance of all candidate libraries. For example, if you have logged data of similar type on 3 different machines, you should import them and select them here, to add 3 normal/abnormal signal couples, one corresponding to each machine.

../_images/55_select_signals.png ../_images/55_group_signals.png

For more information about adding signal couples (vs. concatenating signals in the same file), check the FAQ, Section I. A. 7. Should I concatenate data into single files, or use “signal couples”?

Then, select the number of microprocessor cores from your computer that you wish to dedicate to the benchmark process (see below). Selecting more CPU cores parallelizes the workload of our algorithms and greatly speeds up the process. Use as many as you can, but be aware that using all available CPU cores might temporarily slow down your computer.

When you are ready to start the benchmark, click Validate.

ii. Library performance indicators

NanoEdge AI Studio uses 3 indicators to reflect the performance and relevance of candidate libraries, in the following order of priority:

  • Balanced accuracy

  • Confidence

  • RAM

    ../_images/44_screen4_indicators.png
Balanced accuracy:
This is the ability of the library to classify (i.e. correctly identify) regular signals as regular, and abnormal signals as abnormal.
Optimising balanced accuracy is the first priority of our algorithms.
Confidence:
This is the ability of the library to mathematically separate abnormal signals from regular ones.
More precisely, it is the functional margin: the mathematical distance between normal and abnormal signals.
Increasing this functional margin is the algorithms’ second priority.
RAM:
This is the maximum amount of memory space needed by the library after you integrate it on your microcontroller.
The maximum amount of RAM used is optimised last.

Along with those 3 indicators, a graph shows a plot of all data points against a percentage of similarity (on the y-axis). Similarity is a measure of how much a given data point fits in with (how much it resembles) the existing knowledge base of the library.

Regular signals are shown as blue dots, and abnormal signals as red dots. The threshold (decision boundary between the two classes, “nominal” and “anomaly”), set at 90% similarity, is shown as a grey dashed line.

../_images/44_screen4_plotsignalalone.png

Note

  • 100% balanced accuracy would mean that all blue dots are above the 90% threshold, and all red points are below.
  • 100% confidence would mean that all blue dots are at 100% similarity, while all red dots are at 0% similarity.
  • As the benchmark progresses, confidence may decrease slightly, and RAM may vary dramatically, but balanced accuracy will keep improving, so that at any time you always get the best library found so far.
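As an illustration, balanced accuracy can be computed from the similarity scores and the 90% threshold as sketched below. This is a simplified reconstruction of the indicator’s definition, not the Studio’s exact implementation.

```python
# Sketch: balanced accuracy from similarity scores (0-100) of both classes.
# Regular signals should score at or above the threshold, abnormal ones below.

THRESHOLD = 90.0  # percent similarity (the grey dashed line on the plot)

def balanced_accuracy(regular, abnormal):
    """Average of the per-class correct-classification rates, as a percentage."""
    tpr = sum(s >= THRESHOLD for s in regular) / len(regular)   # regular kept above
    tnr = sum(s < THRESHOLD for s in abnormal) / len(abnormal)  # abnormal kept below
    return 100 * (tpr + tnr) / 2
```

With all blue dots above the threshold and all red dots below, the function returns 100, matching the note above.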

iii. Benchmark progress and summary

As soon as the library selection process is initiated, a graph will be displayed on the right hand side of the screen (see below), showing the evolution of the 3 performance indicators (see above section) over time, as thousands of candidate libraries are tested.

../_images/44_progress_plot.png

Note

If the benchmark seems stuck at 5% and nothing happens within a minute (no plot, or axes with no data points), please stop the benchmark and start a new one; if the issue keeps happening, relaunch the Studio.

The selection algorithms will first try to maximise balanced accuracy, then confidence, and finally to decrease the RAM needed as much as possible.

Warning

The benchmark process may take some time.
Please be patient; have a break, grab a drink.
../_images/44_time_elapsed.png

Only interrupt the benchmark for testing purposes, and don’t expect good results unless all performance indicators are at 90% minimum.

Note

At any time during the benchmark, you can test the current best library without stopping the benchmark.
While a benchmark is running, just move on to Step 5: Emulator to test the best Library found so far, or move on to Step 6: Deploy to compile and deploy it [Paid version / Featured boards only].

When the benchmark is complete, the progress graph will be replaced by a summary, which includes a plot of the library’s learning behavior.

../_images/44_screen4_plotiteration.png

This graph shows the number of learning iterations needed to obtain optimal performance from the library, when it is embedded in your final hardware application. In this particular example, NanoEdge AI Studio recommends calling learn() at least 70 times.

Warning

  • Never use fewer iterations than the recommended number, but feel free to use more (e.g. 3 to 10 times more).
  • This iteration number corresponds to the number of lines to use in your input file, as a bare minimum.
  • These iterations must include the whole range of all kinds of nominal behaviors that you want to consider on your machine.

Several successive benchmarks can be run; all results will be saved. They can be loaded by clicking them on the left hand side of the screen.

../_images/44_screen4_benchmarks_left.png

4. Testing the NanoEdge AI Library

Here (Step 5: Emulator), you will be able to test the Library that was selected during the benchmark process (Step 4) using NanoEdge AI Emulator.

../_images/5_emulator_top.png

NanoEdge AI Emulator is a command-line tool that emulates the behavior of the associated library. Therefore, each library, among hundreds of thousands of possibilities, comes with its own emulator. Here are the Library functions that are available through NanoEdge AI Emulator for testing:

initialize(): run first, before learning/detecting, or to reset the knowledge of the library/emulator
set_sensitivity(): adjust the pre-set, internal detection sensitivity (does not affect learning, only the returned similarity scores)
learn(): start a number of learning iterations (to establish an initial knowledge base, or enrich an existing one)
detect(): start a number of detection iterations (inference), once a minimum knowledge base has been established

See the Emulator and Library documentations for more information.

This screen gives a summary of the selected benchmark (progress, performance, files used…). You can also download the Emulator (and its documentation) associated to the selected benchmark, and use it through the command line via your terminal.

../_images/step5_info.png

Important

When building a smart device, the final features will heavily depend on the way those functions are called. It is entirely up to the developer to design relevant learning and detection strategies, depending on the project’s specificities and constraints.

../_images/ild.png

For example for a hypothetical machine, one strategy could be to:

  • initialize the model;
  • establish an initial knowledge base by calling learn() every minute for 24 hours on that machine;
  • switch to inference mode by calling detect() 10 times every hour (and averaging the returned scores), each day;
  • blink a LED and ring alarms whenever detect() returns any anomaly (average score < 90%);
  • run another learning cycle to enrich the existing knowledge, if temperature rises above 60°C (and the machine is still OK);
  • send a daily report (average number of anomalies per hour, with date, time, machine ID…) using Bluetooth or LoRa.
In summary, those smart functions can be triggered by external data (e.g. from sensors, buttons, to account for and adapt to environment changes).
The scores returned by the smart functions can trigger all kinds of behaviors on your device.
The possibilities are endless.
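The detection part of this hypothetical strategy (averaging 10 detect() scores and flagging an anomaly when the average drops below 90%) can be sketched as follows. The real learn()/detect() functions live in the generated C library; here detect is passed in as a stand-in callable, and all names are assumptions for illustration.

```python
# Sketch of a detection strategy: average a batch of detect() similarity
# scores and flag an anomaly when the average falls below a threshold.
# detect_fn is a stand-in for the Library's detect() smart function.

def average_detect(detect_fn, signals):
    """Run detection on a batch of signals and average the similarity scores."""
    scores = [detect_fn(signal) for signal in signals]
    return sum(scores) / len(scores)

def check_machine(detect_fn, signals, threshold=90.0):
    """Return True if an anomaly should be flagged (average score < threshold)."""
    return average_detect(detect_fn, signals) < threshold
```

On the device, the equivalent logic would wrap the Library’s C functions, and the boolean result would drive the LED, the alarms, or the daily report.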

i. Initialization

Select the benchmark to use, to load the associated emulator.

../_images/select_benchmark.png

When you are ready to start testing, click Initialize Emulator, and proceed as follows:

../_images/step5_functions.png

You can click Initialization anytime to reset the Emulator and wipe all knowledge.

../_images/step5_reset.png

The Emulator function outputs will be displayed on the right side of the screen:

../_images/step5_output.png

ii. Learning

After initialization, no knowledge base exists yet. It needs to be acquired in-situ, using real signals. Your Library won’t be pre-trained with the signals imported before the benchmark in Steps 2 and 3. Therefore, you need to learn some signals.

Warning

A learning phase corresponds to several iterations of the learn() function. You should use at least the minimum number of iterations recommended in the benchmark summary from Step 4. This learning will be incremental and unsupervised.

To learn some signals from a file, click Select file and open the file containing your training data.

../_images/step5_learn_file.png

To learn some signals “live” from your Serial port, using your own data logger, click Serial data. Then, select your Serial / COM port (refresh if needed), choose your preferred baudrate, and Start recording by clicking the red button.

../_images/step5_learn_serial.png

As soon as some signals are learned, the number of learned signals will be indicated.

../_images/step5_goto_detection.png

Click Go to detection after all relevant signals have been learned (these must be nominal, by definition).
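As a minimal sketch, the learning phase described above boils down to repeated calls to learn() on nominal signal buffers. The names, buffer size, and iteration count below are placeholders; use the signal length from your data logging setup and the minimum iteration count from your benchmark summary, and the real learn() name and signature from the header shipped with your Library.

```c
#include <stddef.h>

#define MIN_LEARN_ITERATIONS 30  /* placeholder: use the minimum shown in
                                    your benchmark summary (Step 4)        */
#define BUFFER_SIZE 16           /* placeholder: must match the signal
                                    length used in the Studio              */

/* Stand-in for the library's learn() call; the real one updates the
 * knowledge base incrementally from the input buffer. */
static int signals_learned = 0;
static void learn(float input_buffer[BUFFER_SIZE])
{
    (void)input_buffer;
    signals_learned++;
}

/* Incremental, unsupervised learning phase: feed one nominal signal buffer
 * per iteration, at least MIN_LEARN_ITERATIONS times. */
void learning_phase(float signals[][BUFFER_SIZE], size_t n_signals)
{
    for (size_t i = 0; i < n_signals; i++)
        learn(signals[i]);
}
```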

iii. Detection

Once a first knowledge base has been established, you can run Detection on any signals, to check whether they would be classified as nominal or anomalous by the Library, and to make sure this Library performs as intended.

../_images/step5_detect_file.png

As usual, the signals to use for detection can be imported from a file, or from the Serial port using a data logger.
Select the signals that you wish to use, and adjust the sensitivity if needed. A pie chart will summarize the detection results.

../_images/step5_detect_results.png

When detecting using live data from the Serial port, a graph will show how the detection performance (similarity percentage) evolves in real time.

../_images/step5_detect_serial.png
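Conceptually, the pie chart summarizes a tally like the one below. The 90% threshold is illustrative (it mirrors the score used earlier in this documentation); the actual nominal/anomaly decision is made inside detect(), modulated by the sensitivity setting.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative threshold only: the real decision is made by detect(). */
#define NOMINAL_THRESHOLD 90

typedef struct { size_t nominal; size_t anomaly; } detection_summary;

/* Tally detect() similarity scores (0-100 %) into the kind of
 * nominal / anomaly split the Studio summarizes as a pie chart. */
detection_summary summarize(const uint8_t scores[], size_t n)
{
    detection_summary s = {0, 0};
    for (size_t i = 0; i < n; i++) {
        if (scores[i] >= NOMINAL_THRESHOLD) s.nominal++;
        else                                s.anomaly++;
    }
    return s;
}
```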

Note

Feel free to repeat as many times as needed, adjusting the sensitivity or running additional Learning cycles in the process.

  • If the results obtained are satisfactory, move on to the next step, and Deploy your library on your microcontroller.
  • Otherwise, it is time to review your data logging procedure (sampling frequency, buffer size, signal length…), import other sets of signals, and start a new benchmark.

You will probably not land your ideal library the first time. Using NanoEdge AI Studio is an iterative process. Try, learn, adjust, and repeat!

Important

Possible causes of poor results:

  • The data used for library selection (benchmark) are not consistent with those used for testing via the Emulator/Library. The regular and abnormal signals imported in the Studio should correspond to the same machine behaviors, regimes, and physical phenomena as those used for testing.
  • Your machine learning model is not rich enough. Don’t hesitate to run several learning cycles, as long as they all use nominal data as input (only normal, expected behavior should be learned).
  • Your balanced accuracy and confidence scores were below 90%.
  • You used an insufficient number of signals in either the regular or the abnormal signals file. Make sure you used more lines than the minimum recommended by the Studio (ideally 3 to 10 times more, and never fewer than 20-50).
  • The sampling method is inadequate for the physical phenomena studied, in terms of frequency, buffer size, duration, etc.
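A quick arithmetic check can reveal an inadequate sampling setup: the time span covered by one buffer is simply the buffer size divided by the sampling frequency, and it should cover the physical phenomenon you want to capture. The helper and the numbers below are illustrative.

```c
#include <stddef.h>

/* Time span (in seconds) covered by one signal buffer. */
double buffer_duration_s(size_t buffer_size, double sampling_hz)
{
    return (double)buffer_size / sampling_hz;
}
```

For example, 256 samples at 1000 Hz span 0.256 s; a vibration pattern repeating every 0.5 s would not fit in a single buffer, so the buffer size or sampling frequency would need to change.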

5. Downloading the NanoEdge AI Library

This feature is only available:

  • in the Trial version of NanoEdge AI Studio, limited to the featured boards which can be selected during project creation;
  • in the Paid version of NanoEdge AI Studio.

In this step (Step 6: Deploy), the library will be compiled and downloaded, ready to be used on your microcontroller for your embedded application.

../_images/6_deploy_top.png

Before compiling the library, a few compilation flags are available, all checked by default.

../_images/55_screen5_dev_options.png

On the right-hand side of the screen, a code snippet is shown. It provides general guidelines on how your code could be structured, and how the NanoEdge AI Library functions should be used (for more information, see the NanoEdge AI Library and NanoEdge AI Emulator documentation).

If you ran several benchmarks, make sure that the correct benchmark is selected. Then, when you are ready to download the NanoEdge AI Library, click Compile.

../_images/compile.png

Select Development version to get a library intended for testing and prototyping. If you would like to move to production with NanoEdge AI Library integrated into your device, please contact us for more details and to get the proper Library version.

../_images/step6_compile.png

After a short delay, a .zip file will be downloaded to your computer.

../_images/6_zip_file.png

It contains all relevant documentation, the NanoEdge AI Emulator (both Win32 and Unix versions), the NanoEdge AI header file (C and C++), and a .json file containing some library details.

You can also re-download any previously compiled library, via the archived libraries list:

../_images/6_archived_libraries.png

Congratulations; you can now use your NanoEdge AI Library!


Resources

Documentation
All NanoEdge AI Studio documentation is available here.
Tutorials
Step-by-step tutorials, to use NanoEdge AI Studio to build a smart device from A to Z:

Useful links: