Hello, I’m Kishimoto!
I’m an Edge Engineer intern who has been working at Hacarus for quite some time now.
Last year, Sony released SPRESENSE, a high-performance board computer with Arduino IDE support, which I’ll be talking about today!
SPRESENSE is an Arduino-compatible edge device loaded with a high-performance CPU. Using the SPRESENSE, I wanted to test its capabilities for edge machine learning, which is often difficult to achieve due to the low computing power of edge devices.
In recent years, inference accelerators for edge devices have become popular. However, I do not see much machine learning actually being run on the edge. Therefore, I decided to try sparse modeling machine learning, Hacarus’s forte, on the SPRESENSE.
1. SPRESENSE’s Development Environment
SPRESENSE is a high-performance Arduino compatible board developed by Sony and equipped with the CXD5602 sensing processor, also developed by Sony. Below are listed a few of its strengths:
- Low power consumption
- High computing ability
- GPS positioning function
- High resolution voice input / output
Along with being built into the Xperia Ear Duo, the chip reportedly uses machine learning to recognize gestures such as the swinging or oscillation of an object. With that in mind, I investigated the following two points:
- Whether it can be developed with the Arduino IDE: since it is Arduino compatible, the learning cost and the barrier to entry should be low.
- Whether its high computing performance is enough to train models on the edge side.
Arduino’s hardware designs are open source, so anyone can freely release compatible products as long as they comply with the license. The Arduino IDE, Arduino’s integrated development environment, is also open source and free to use.
1.1 Setting up the Development Environment
- Windows 10
- Arduino IDE 18.104.22.168
See here for Arduino IDE settings, boot loader writing, etc.
2. Implementation of ADMM
ADMM (alternating direction method of multipliers), one way of implementing Lasso, uses the algorithm shown in the image below. By optimizing the L1 norm, the weights of the resulting linear regression model become sparse. All the code implemented this time is released on Hacarus’s GitHub. Please refer to the ADMM optimization in the admm class in the file named lasso_train.h.
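To make the iteration concrete, here is a minimal sketch of ADMM for Lasso in the scalar case with an identity design matrix. This is illustrative only, not the actual code from lasso_train.h; the function names are my own. The key ingredient is the soft-thresholding operator, which is what drives weights exactly to zero.

```cpp
// Soft-thresholding operator: the proximal map of the L1 norm.
// Values with magnitude below t are pushed exactly to zero,
// which is what makes the learned weights sparse.
double soft_threshold(double v, double t) {
    if (v >  t) return v - t;
    if (v < -t) return v + t;
    return 0.0;
}

// Scalar ADMM for min 0.5*(x - b)^2 + lambda*|x| (identity design).
// Each iteration performs the three classic ADMM updates:
//   x-update: closed-form ridge-like step
//   z-update: soft-thresholding
//   u-update: dual ascent on the constraint x = z
double lasso_admm_scalar(double b, double lambda,
                         double rho = 1.0, int iters = 100) {
    double x = 0.0, z = 0.0, u = 0.0;
    for (int k = 0; k < iters; ++k) {
        x = (b + rho * (z - u)) / (1.0 + rho);
        z = soft_threshold(x + u, lambda / rho);
        u += x - z;
    }
    return z;
}
```

In this simple setting the converged answer equals soft_threshold(b, lambda), which is a handy sanity check; the real implementation does the same three updates with matrix operations in place of the scalars.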
In order to simplify matrix calculations, we created a matrix class named ndarray, which implements matrix generation, arithmetic operations, matrix multiplication, inverse matrix calculation, and so on. With it, you can perform simple matrix operations like the following.
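The actual ndarray class lives in the repository; the sketch below only illustrates the kind of interface such a class provides. All names and storage choices here are hypothetical, not the real ndarray API.

```cpp
#include <vector>

// A tiny matrix class illustrating the kind of operations the
// article's ndarray class provides (generation, element access,
// multiplication). Names and layout are illustrative only.
struct Matrix {
    int rows, cols;
    std::vector<double> data;            // row-major storage

    Matrix(int r, int c) : rows(r), cols(c), data(r * c, 0.0) {}

    double& at(int r, int c)       { return data[r * cols + c]; }
    double  at(int r, int c) const { return data[r * cols + c]; }

    // Matrix product: (rows x cols) * (cols x o.cols)
    Matrix dot(const Matrix& o) const {
        Matrix out(rows, o.cols);
        for (int i = 0; i < rows; ++i)
            for (int k = 0; k < cols; ++k)
                for (int j = 0; j < o.cols; ++j)
                    out.at(i, j) += at(i, k) * o.at(k, j);
        return out;
    }
};
```

On a memory-constrained board, a flat row-major buffer like this avoids the per-row allocations of nested containers, which is one reason to roll a small class rather than pull in a general-purpose library.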
3. Run ADMM
3.1 Data Set Used for Learning
We used the Boston house prices data set. Features 1 to 13 below are used by the machine learning model to predict 14: MEDV.
- CRIM – per capita crime rate by town
- ZN – proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS – proportion of non-retail business acres per town
- CHAS – Charles River dummy variable (1 if tract bounds river; 0 otherwise)
- NOX – nitric oxides concentration (parts per 10 million)
- RM – average number of rooms per dwelling
- AGE – proportion of owner-occupied units built prior to 1940
- DIS – weighted distances to five Boston employment centers
- RAD – index of accessibility to radial highways
- TAX – full-value property-tax rate per $10,000
- PTRATIO – pupil-teacher ratio by town
- B – 1000(Bk − 0.63)^2 where Bk is the proportion of blacks by town
- LSTAT – % lower status of the population
- MEDV – median value of owner-occupied homes in $1000’s
3.2 Reading a Data Set
Open the serial monitor via Tools > Serial Monitor in the Arduino IDE. Enter the data size and the data set in the serial monitor and send them to the SPRESENSE over serial communication.
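On the device side, this step amounts to splitting the comma-separated numbers typed into the serial monitor. A hedged, host-side sketch of that parsing (not the actual SPRESENSE sketch; the function name is my own) could look like:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse one line of comma-separated values, the format you would
// type into the Arduino IDE serial monitor. On the SPRESENSE the
// line would come from the serial port (e.g. a readStringUntil-style
// call) rather than a std::string.
std::vector<double> parse_csv_line(const std::string& line) {
    std::vector<double> values;
    std::stringstream ss(line);
    std::string field;
    while (std::getline(ss, field, ','))
        values.push_back(std::stod(field));
    return values;
}
```

Sending the data size first lets the sketch preallocate its matrix before the feature rows arrive, which matters on a board with limited RAM.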
3.3 Results of the Machine Learning
After the data is input and machine learning is conducted, the learned sparse vector is sent back to the PC over serial communication and displayed. The result is the image below. From the displayed values, you can see that the learned vector is sparse.
14: MEDV (median house price) receives a positive weight from 6: RM (average number of rooms per dwelling), and negative weights from 13: LSTAT (% lower-status population) and 11: PTRATIO (pupil-teacher ratio). These results were produced in a mere 0.16 seconds, so it is clear that this sparse modeling machine learning method is extremely fast.
4. Implementation of Fused Lasso
Fused Lasso suppresses changes between adjacent elements by adding a penalty term on the differences of adjacent elements. The implementation can be found in the fused_lasso class; it is obtained by slightly extending the original admm class. The open source library spm-image, maintained by Hacarus, contains an example of this generalized-Lasso extension, and there is also a presentation by Sparse Modeling Evangelist Masui – if you are keen to learn more, I recommend checking them out.
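The structural change versus plain Lasso is small: instead of penalizing ||x||_1, Fused Lasso penalizes ||Dx||_1, where D is the first-difference matrix. A sketch of building that matrix (illustrative only, not the fused_lasso class itself):

```cpp
#include <vector>

// First-difference matrix D for Fused Lasso: (n-1) x n, with
// row i containing -1 at column i and +1 at column i+1, so that
// (D x)_i = x[i+1] - x[i]. Penalizing ||D x||_1 pushes adjacent
// elements of x toward sharing the same value.
std::vector<std::vector<double>> difference_matrix(int n) {
    std::vector<std::vector<double>> D(n - 1,
                                       std::vector<double>(n, 0.0));
    for (int i = 0; i < n - 1; ++i) {
        D[i][i]     = -1.0;
        D[i][i + 1] =  1.0;
    }
    return D;
}
```

Because the penalty is zero wherever neighboring entries are equal, the recovered signal becomes piecewise constant, which is exactly what makes it suitable for denoising a square wave.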
4.1. Data Used to Conduct Machine Learning
Here, we prepared a task of adding noise to a simple square wave and then removing that noise.
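A sketch of how such test data could be generated is below. The 100-sample dimension matches the experiment, but the step placement, noise level, and seed are my assumptions, not the article’s exact setup.

```cpp
#include <random>
#include <vector>

// Build a 100-dimensional step signal (0 for the first half, 1 for
// the second half) and add Gaussian noise -- the kind of denoising
// target used in the Fused Lasso experiment. The noise level sigma
// and the fixed seed are assumptions for reproducibility.
std::vector<double> noisy_step(int n = 100, double sigma = 0.1,
                               unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> noise(0.0, sigma);
    std::vector<double> x(n);
    for (int i = 0; i < n; ++i)
        x[i] = (i < n / 2 ? 0.0 : 1.0) + noise(rng);
    return x;
}
```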
4.2. Results of the Machine Learning
The training results, displayed with matplotlib, are shown below. Noise was added to a 100-dimensional step-shaped square wave (blue) to produce the noisy signal (orange), which Fused Lasso then denoised (green).
In conclusion, I implemented Lasso and its derivative Fused Lasso on the SPRESENSE to test its compatibility. Results indicate that SPRESENSE is able to operate at a high speed despite low power consumption, and it is clear that it is possible for embedded devices to sufficiently execute linear regression models using the techniques described.
As SPRESENSE is compatible with Arduino, it was easy to pick up and get started – and the program could be written without needing to learn a new environment.
While I did not use them this time, the built-in GPS and DAC capabilities, and the possibility of connecting a camera and/or Wi-Fi to the device, make me keen on exploring further projects with the SPRESENSE in the future.