_images/cartesiam_logo.png

Frequently Asked Questions

_images/banner.png

I. NanoEdge AI Studio

A. Input data and formatting

1. How should my input data be structured?

Input files contain m lines, each composed of n numerical values separated by a uniform separator. Each line represents an independent signal example. The number of values per line is specific to each project: it depends on the number of axes of the sensor used, and on its buffer size (the number of samples per axis).

Mono-sensor

Here is an example for a 3-axis sensor. We have a collection of m signals (lines) with a buffer size of 256 on each axis, which results in 256 × 3 = 768 values per line.

_images/input_example.png

Warning

We have NUMBER_OF_AXES * BUFFER_LENGTH values per line. A line represents a signal snapshot consisting of several samples.

Multi-sensor

Here is an example for a 3-axis accelerometer, 3-axis magnetometer, 3-axis gyroscope coupled with a temperature sensor (10 variables in total).

_images/input_example_multi.png

Warning

We only have NUMBER_OF_VARIABLES values per line (effectively, a buffer length of 1). A line represents a single sample of the signal, not a full signal snapshot anymore.
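The two formatting rules above can be sketched in Python. This is only an illustration: the space separator, the `%.6f` formatting, and the x/y/z interleaving order are assumptions for the example, not requirements of the Studio (check the input-format documentation for your project).

```python
# Sketch: build NanoEdge-AI-style input lines.
# Mono-sensor: one line = NUMBER_OF_AXES * BUFFER_LENGTH values (a full signal snapshot).
# Multi-sensor: one line = NUMBER_OF_VARIABLES values (a single sample).

NUMBER_OF_AXES = 3
BUFFER_LENGTH = 256
NUMBER_OF_VARIABLES = 10  # e.g. 3-axis accel + 3-axis mag + 3-axis gyro + temperature

def format_mono_line(samples):
    """samples: list of (x, y, z) tuples, one per buffer slot."""
    assert len(samples) == BUFFER_LENGTH
    # Interleave axes per sample: x1 y1 z1 x2 y2 z2 ... (assumed ordering)
    flat = [value for triplet in samples for value in triplet]
    assert len(flat) == NUMBER_OF_AXES * BUFFER_LENGTH  # 768 values
    return " ".join(f"{v:.6f}" for v in flat)

def format_multi_line(sample):
    """sample: one value per variable (effectively a buffer length of 1)."""
    assert len(sample) == NUMBER_OF_VARIABLES
    return " ".join(f"{v:.6f}" for v in sample)
```

Each call to `format_mono_line` produces one complete line (one independent signal example) of the "regular" or "abnormal" input file.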

Note

If your input files contain formatting errors, or suspicious data, you will see warnings or errors in NanoEdge AI Studio. You can click on those warnings/errors for more information and advice.

_images/checks.png

For more information, see Studio: input file format.


2. How much data should my “regular signals” file contain?

The “regular signals” input file contains lines that each correspond to data considered normal, or nominal.

Each line represents an independent signal example. Those lines correspond to learning iterations. These learning examples will be seen one by one by the algorithms, and used to establish a model and enrich its knowledge.

Warning

For best performance, never use too few lines (fewer than 20-50) or too many (hundreds of thousands, or more).
A realistic range is anywhere between a hundred and a few thousand.

Among all these signals (or lines, or iterations), all nominal regimes should be included.

For example, if I want to detect anomalies in the vibration patterns of a 3-speed fan, I will include vibrational data corresponding to each of the 3 speeds, possibly including transition phases, so that all normal vibration patterns are represented within the regular signals file. Their position or order within the file doesn’t matter, as each line is independent.


3. How do I choose the best buffer size / sampling frequency?

Those parameters highly depend on each use case, and each project’s specificities.
Please consider these 3 important parameters:
  • n: buffer size
  • f: sampling frequency
  • L: signal length

They are linked together via: n = f * L. In other words, by choosing two (according to your use case), the third one will be constrained.

Here are general recommendations. Make sure that:

  • the sampling frequency is high enough to catch all desired signal features. To sample a 1000 Hz phenomenon, the Nyquist criterion requires at least double that frequency (i.e. sample at 2000 Hz or more).

  • your signal is long (or short) enough to be coherent with the phenomenon to be sampled. For example, if you want your signals to be 0.25 seconds long (L), you must have n / f = 0.25. For example, choose a buffer size of 256 with a frequency of 1024 Hz, or a buffer of 1024 with a frequency of 4096 Hz, and so on.

    For more information, see Studio: sampling frequency.

Note

For best performance, always use a buffer size n that is a power of two (e.g. 128, 512…).
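The relation n = f * L, and the power-of-two recommendation, can be checked numerically. A minimal sketch, using the figures from the examples above:

```python
# n = f * L: choose any two of (buffer size, sampling frequency, signal length)
# and the third is constrained.

def buffer_size(sampling_hz, length_s):
    """Buffer size n required for a signal of length_s seconds at sampling_hz."""
    return sampling_hz * length_s

# For a 0.25 s signal, as in the text:
assert buffer_size(1024, 0.25) == 256   # buffer of 256 at 1024 Hz
assert buffer_size(4096, 0.25) == 1024  # buffer of 1024 at 4096 Hz

def is_power_of_two(n):
    """True if n is a positive power of two (recommended for buffer sizes)."""
    n = int(n)
    return n > 0 and (n & (n - 1)) == 0

assert is_power_of_two(256) and is_power_of_two(1024)
```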


4. Should I include all possible anomalies in my “abnormal signals”?

Of course not. It would defeat the purpose, and be almost impossible to do in practice.

The “abnormal signals” file doesn’t have to be exhaustive. Just include examples of some anomalies that you’ve already encountered, or that you suspect could happen. If needed, don’t hesitate to create “anomalies” manually.

Note

It is OK to have fewer “abnormal” than “regular” signals, but try to include as many as possible.
Realistically, any number between a few dozen and a few hundred makes sense.

In order for the selected NanoEdge AI Library to detect very subtle anomalies, we recommend that the data provided as abnormal signals include at least some examples of subtle anomalies as well, and not only gross, obvious ones.


5. If I provide “abnormal signals”, is the learning really unsupervised?

NanoEdge AI Studio’s purpose is to search for the best possible NanoEdge AI Library for your application, among millions of possibilities. The “abnormal signals” are only provided to give some context, to narrow down the scope of this search.

After the best Library is found, it will be able to learn behaviors specific to your use case, but it will have no knowledge yet. All knowledge must be acquired in situ, by running learning iterations after the Library is embedded on the microcontroller.

No information is learned regarding anomalies. The Library only learns what is normal, and figures out what is an anomaly by itself. As such, when embedded, the learning is completely unsupervised.


6. Is overlapping supported?

Neither the Studio nor the Library will explicitly take signal overlapping into account, as each line of the input files is independent.

Signal overlapping may still be included if you have good reasons to use it, but be aware that it might hurt performance (e.g. by “diluting” useful signal features).


7. Should I concatenate data into single files, or use “signal couples”?

Let’s consider that you have several data sources, i.e. multiple input files for both “regular signals” and “abnormal signals”.

  1. Should you concatenate all nominal data into the “regular signals” file, and all abnormal data into the “abnormal signals” file?
  2. Or instead, should you keep multiple “regular” and “abnormal” files, and select multiple signal couples via the benchmark window just before starting it?

There are two possibilities:

  1. When considering different regimes/behaviors on a single machine, or several anomaly types on a single machine, then you should CONCATENATE.
  2. When considering different machines that all have the same behavior (the data will slightly differ, because two machines can never be 100% identical), then you should ADD SIGNAL COUPLES.

Note

In case you are not sure which option to choose, go with option 1: concatenate all regular data in one file, and all abnormal data in another file.
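Option 1 (concatenation) is simply an append of all files of the same kind into one file. A minimal sketch, where the file names are hypothetical and the separator inside each line is left untouched:

```python
# Concatenate several input files of the same kind (all regular, or all abnormal)
# into a single file. Line order doesn't matter: each line is an independent signal.

def concatenate(input_paths, output_path):
    with open(output_path, "w") as out:
        for path in input_paths:
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line:  # skip empty lines
                        out.write(line + "\n")
```

For example, `concatenate(["fan_speed1.csv", "fan_speed2.csv", "fan_speed3.csv"], "regular_signals.csv")` would merge three nominal regimes into a single “regular signals” file.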


B. Performance issues

1. How can I improve my benchmark results?

Here are a few things to try:

  • Restart a benchmark using the same data.
  • Increase the “Max RAM” parameter (e.g. 32 kB or more).
  • Change your sampling frequency, or make sure it is coherent with the phenomenon to sample.
  • Change your buffer size (and hence, signal length), or make sure it is coherent with the phenomenon to sample.
  • Make sure that your buffer size is a power of two.
  • If using a multi-axis sensor, treat each axis individually by running several benchmarks with a single-axis sensor.
  • Include more learning examples (lines) in your “regular signals” file.
  • Check the quality of your signals; make sure they contain the relevant features / characteristics.
  • Include more anomalies (possibly more types) in your “abnormal signals” file.
  • Check that the sampling methodology and sensor parameters were the same for both “regular” and “abnormal” signals.
  • Check that your signals are not too noisy, too low intensity, too similar, or unrepeatable.
  • Remember that microcontrollers are resource-constrained (audio/video, image and voice recognition won’t be supported).

If still unable to get good benchmark results, don’t hesitate to contact us at support@cartesiam.com.


2. Why is my benchmark taking so long?

During benchmark, our search algorithms test tens of thousands of libraries, in order to find the best possible candidate among millions of possibilities. Please be patient and let the process finish properly.

You may speed up the process by:

  • increasing the number of CPU cores dedicated to the search (just before clicking “Validate” and starting the benchmark).
  • reducing the amount of data provided in both “regular” and “abnormal” signals files (warning: this may severely hurt performance).

Note

For quick testing purposes it is OK to terminate a benchmark prematurely, and test the performance of the best Library selected (up to that point) using the Emulator. The summary plots and graphs will give you an idea of the performance you can expect (is the data well classified? well separated?).


3. My benchmark is stuck at 5% and nothing is happening.

After starting a benchmark (Step 4: Optimize and Benchmark), the progress bar goes to 5% and some graphs start appearing. If nothing happens within a minute (no plot, or axes with no data points), please stop the benchmark and start a new one.

If the same issue keeps happening, please quit and relaunch the Studio. Then, start a new benchmark.

If the issue persists, send us your log files (in Documents/workspaceNanoEdgeAi/logs/) at support@cartesiam.com. We will investigate the issue and come back to you.


C. Errors and bugs

1. Why does the Studio keep asking for my license key?

NanoEdge AI Studio usually asks for your license key after installing an important update, after updating your license, or after some time has passed. NanoEdge AI Studio shouldn’t continually ask for your license key every time you launch it.

However, this might happen for some users due to poor network performance, when rebooting your computer, when using an older machine, and so on.

Make sure your antivirus or firewall software is not blocking any incoming or outgoing connection, and that you are not using third-party software that may remove temporary files created by NanoEdge AI Studio.


2. How should I configure my proxy?

If you’re using a proxy, use the following settings:

Licensing API:
  104.31.76.187
  104.31.77.187
  or via URL: https://api.cryptlex.com:443

Cartesiam API for library compilation:
  40.113.111.93
  or via URL: https://apidev.cartesiam.net

3. My license key is invalid, what can I do?

Double check that you entered your license key properly.

In some cases (poor internet connection, issues with the licensing platform, firewall or proxy issues) you will have to do an offline activation.
Here are the steps to follow:
  1. Choose Offline activation and enter your license key.
  2. Copy the long string of characters that appears.
  3. Go to the Cryptlex login page (https://cartesiam.cryptlex.app/auth/login).
  4. Reset your password using the email address provided when downloading NanoEdge AI Studio.
  5. Log into your Cryptlex dashboard using your new password.
  6. Click on your license key, then Activations, then Offline activation.
  7. Click ACTIVATION, then paste the string of characters copied in step 2, and click Download response.
  8. In NanoEdge AI Studio, click Import file and open the downloaded .dat file.

If still unable to activate your license, please contact us at support@cartesiam.com.


4. Why do I keep seeing the “Port XXXX is used” message?

It happens when the port used for communication (between the local Cartesiam API, on your machine, and NanoEdge AI Studio) is used by some other program. It is set to 5000 by default. You may change it to any other port that you know is not currently being used.

Double check that:

  • you have a sufficient amount of resources on your machine for NanoEdge AI Studio to run properly (RAM, CPU cores).
  • the background scanning process of your antivirus or firewall is not interfering with the Studio.
  • you’re not trying to quit and relaunch the Studio too quickly.
  • you’re not running multiple instances of the Studio.

5. I see a blank page and nothing happens when I click XXX.

After clicking somewhere, if nothing happens, and the window stays blank for more than one minute, please consider closing and restarting the Studio.

When starting a benchmark, progress graphs and plots should appear within one or two minutes. If nothing happens, and the lower half of your screen stays blank, please quit and relaunch the Studio. Then, start a new benchmark.

Note

Don’t hesitate to give feedback and report bugs using the Freshdesk platform, we appreciate it, and it helps us improve our software faster.


II. NanoEdge AI Library / Emulator

1. What is the “recommended number of iterations”?

The minimum number of iterations is a recommendation that appears at the end of a benchmark. It indicates how fast the Library can learn once embedded.

_images/minimum_iterations.png

It corresponds to the minimum number of times the NanoEdgeAI_learn function has to be called in order for the knowledge acquired by the Library’s model to be rich enough.

Warning

If your use case has several regimes, or modes of nominal behavior, these should all be included within the learning iterations.

For instance, when monitoring vibration patterns on a 3-speed fan, the Studio recommends a minimum of 90 iterations. It means that you must provide at least 90 examples of signals in total, including all 3 speeds (e.g. 30 iterations, or lines, per speed at the very minimum). Of course you could also provide more, e.g. 200 examples per speed (totaling 600 iterations).

Note

This number is a strict minimum. Realistically, don’t hesitate to use more, e.g. 10x this number, if needed.
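The arithmetic behind the fan example is simply the recommended minimum spread across the nominal regimes. A small sketch:

```python
# Spread the recommended minimum number of learning iterations across regimes,
# rounding up so the total never falls below the recommendation.

def min_lines_per_regime(recommended_min, n_regimes):
    return -(-recommended_min // n_regimes)  # ceiling division

# 3-speed fan example from the text: 90 recommended iterations, 3 regimes.
assert min_lines_per_regime(90, 3) == 30
assert min_lines_per_regime(90, 3) * 3 >= 90
```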


2. What is the similarity score?

The similarity score is a measure of how mathematically similar a given sample is, compared to the existing knowledge base of the Library.

If a sample “looks” like something seen before, it will have a similarity score near 100 (%); if it is completely new (looks dramatically different), the score will be close to 0.


3. How does sensitivity work?

The NanoEdgeAI_set_sensitivity function changes the sensitivity of the model dynamically at any time, without having to go through a new learning phase. This sensitivity has no influence on the knowledge acquired during the learning steps; it only plays a role in the detection step.

It acts as a linear scaler of the pre-set internal sensitivity (optimized during benchmark in the Studio). It influences the similarity percentages that are returned by the NanoEdgeAI_detect function.

Note

  • The default sensitivity value is 1. A sensitivity value between 0 and 1 (excluded) decreases the sensitivity of the model, while a value between 1 and 100 increases it. We recommend increasing or decreasing sensitivity in steps of 0.1.
  • Sensitivity values of 1.1 to 100 will tend to decrease the similarity percentages returned, while values of 0 to 0.9 will tend to increase them.
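As a purely illustrative toy model (not the Library’s actual internals, which are not public), a linear sensitivity scaler applied to an internal anomaly measure could behave like this:

```python
# Toy model only: shows how a linear sensitivity scaler can shift similarity
# scores. The 'distance' and 'threshold' are hypothetical internal quantities,
# NOT the actual NanoEdge AI implementation.

def toy_similarity(distance, threshold, sensitivity=1.0):
    """Map an anomaly 'distance' to a 0-100 similarity score.
    Higher sensitivity scales the distance up, lowering the score."""
    scaled = distance * sensitivity
    score = max(0.0, 100.0 * (1.0 - scaled / threshold))
    return min(100.0, score)

s_default = toy_similarity(0.5, threshold=1.0)                 # 50.0
s_more = toy_similarity(0.5, threshold=1.0, sensitivity=1.5)   # 25.0: more sensitive, lower score
s_less = toy_similarity(0.5, threshold=1.0, sensitivity=0.5)   # 75.0: less sensitive, higher score
assert s_more < s_default < s_less
```

The point of the sketch is only the direction of the effect: increasing sensitivity pushes similarity scores down, decreasing it pushes them up, without retraining anything.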

4. Can the Library learn several regimes / machine behaviors?

Yes. If the data contained in the “regular signals” file imported in NanoEdge AI Studio correspond to different regimes, then the Library will be able to detect those different regimes and consider them as “nominal”. This selected Library will include a machine learning model that has the ability to adapt to those differences in machine behavior.

It is also important, before embedding the Library, to properly design learning cycles, so that all the different regimes to be considered as “nominal” are learned. You may run a learning phase for each of those different regimes, or a single learning phase that includes all of them.


5. Why are my detection results so poor?

If the performances of the embedded Library seem poor, there are a few elements to check. Make sure that:

  • your balanced accuracy and confidence scores were above 90% at the end of the benchmark (Studio).
  • the data used for benchmark (Studio) are coherent with the ones you’re passing to the Library’s functions. The regular and abnormal signals imported in the Studio should correspond to the same machine behaviors, regimes, and physical phenomena as the ones used for testing.
  • your machine learning model is rich enough. Don’t hesitate to run several learning cycles, as long as they all use nominal data as input (only normal, expected behavior should be learned).
  • the sampling method is adequate for the physical phenomenon studied, in terms of frequency, buffer size, duration, etc.

In summary, if in the Studio the benchmark results were good, and the selected Library was properly tested and validated using the Emulator, then the embedded Library will give similar performance if used in the same conditions, with data coherent with those used for Library selection.


6. How can I use the “learn” and “detect” functions properly?

When building a smart device, the final features will heavily depend on the way those functions are called. It is entirely up to the developer to design relevant learning and detection strategies, depending on the project’s specificities and constraints.

_images/ild.png
The learn and detect functions can be triggered by external data (e.g. from sensors, buttons, to account for and adapt to environment changes).
The scores returned by these functions can trigger all kinds of behaviors on your device.

For example for a hypothetical machine, one strategy could be to:

  • initialize the model;
  • establish an initial knowledge base by calling learn() every minute for 24 hours on that machine;
  • switch to inference mode by calling detect() 10 times every hour (and averaging the returned scores), each day;
  • blink a LED and ring alarms whenever detect() returns an anomaly (average score < 90%);
  • run another learning cycle to enrich the existing knowledge, if temperature rises above 60°C (and the machine is still OK);
  • send a daily report (average number of anomalies per hour, with date, time, machine ID…) using Bluetooth or LoRa.
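The detection part of a strategy like this can be sketched as follows. This is Python pseudocode with stand-ins for the real C API (NanoEdgeAI_learn / NanoEdgeAI_detect); `read_sensor_buffer` and the stub bodies are hypothetical placeholders for your firmware hooks:

```python
# Sketch of a detection pass: average several detect() scores and flag an
# anomaly below a threshold. All stubs below are placeholders, not the real API.
import random

def read_sensor_buffer():
    """Placeholder for real sensor acquisition (e.g. 768 values per snapshot)."""
    return [random.random() for _ in range(768)]

def detect(buffer):
    """Stand-in for NanoEdgeAI_detect(); returns a similarity score 0-100."""
    return random.uniform(80, 100)

def hourly_detection_pass(n_calls=10, threshold=90.0):
    """Average similarity over n_calls; report an anomaly below threshold."""
    scores = [detect(read_sensor_buffer()) for _ in range(n_calls)]
    average = sum(scores) / len(scores)
    return average, average < threshold

average_score, anomaly = hourly_detection_pass()
# if anomaly: blink a LED, ring alarms, log the event, etc.
```

Averaging several calls, as in the strategy above, smooths out isolated outlier scores before deciding to raise an alarm.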

7. Can I save the Library’s knowledge and use it on another device?

Yes. For more information, see Library: creating backups.

Note

This feature is mainly used to prevent loss of knowledge, e.g. in case of power failure.

Warning

While it is possible to transfer the knowledge acquired on one device to another device, we strongly discourage it. Indeed, it defeats the purpose of acquiring knowledge in situ.

The knowledge acquired on one device isn’t necessarily transferable to another. For example, even a very subtle misplacement of an accelerometer can be enough to ruin the Library’s performance in vibration analysis, if no additional learning is done in situ.

If you really intend to transfer some knowledge, then run at least a few learning cycles in-situ on the new device, to enrich the knowledge and adapt it to the new environment.


8. How can I get a Library that is very general (or very specific)?

The library yielded by NanoEdge AI Studio reflects the data it was given as input. Depending on your use case, it can be either:

  • highly specialized, tailored to one specific application or machine;
  • or very versatile, adaptable to a wide range of different machines (of the same type).

To get a very specific library, your imported data (Studio: Steps 2 and 3) must be focused and have low variance. It should contain many examples of the same signals, which must be as repeatable as possible.

To get a general and adaptable library, your imported data must contain a wide range of signals that represent different behaviors, possibly logged using several kinds of machines (e.g. industrial vibrating machines used for different tasks).


9. How can I open Serial / COM / USB ports on Linux?

  1. Install the following package by entering:

    sudo apt-get install libgconf-2-4
    
  2. Add yourself to the “dialout” group:

    sudo usermod -a -G dialout $USER
    
  3. Create a rule file to automatically enable permissions when a device is connected:

    sudo nano /etc/udev/rules.d/50-local.rules
    
  4. Paste this line in the file you just created (where /dev/ttyACM0 targets your USB port).

    ACTION=="add", ATTRS{idProduct}=="0042", ATTRS{idVendor}=="2341", DRIVERS=="usb", RUN+="chmod a+rw /dev/ttyACM0"
    
  5. Press Control-O (then Enter) to save the file, then Control-X to exit.

Note

You may need to log out and log back in (or reboot your computer) for the changes to take effect.


Resources

Documentation
All NanoEdge AI Studio documentation is available here.
Tutorials
Step-by-step tutorials, to use NanoEdge AI Studio to build a smart device from A to Z:
