This tutorial will show you how to get started with the HUSKYLENS 2. The HUSKYLENS 2 from DFRobot is an AI vision sensor with a replaceable 2MP camera, a 2.4-inch IPS touchscreen, a microphone, a speaker, and indicator lights.
Out of the box, the HUSKYLENS 2 supports over 20 built-in AI models, ranging from object recognition and face tracking to pose estimation and instance segmentation. In addition, you can deploy custom-trained models onto the device via a YOLO-style workflow.
You will learn how to connect the HUSKYLENS to an Arduino or ESP32 via I2C and how to programmatically retrieve detection results for different AI algorithms. This allows you to control external devices from your Arduino or ESP32 based on detections.
For instance, in this tutorial we will build an Emotion Traffic Light that switches on an LED (red, yellow, or green) depending on the emotion detected in a face (Anger, Neutral, Happy).
Let’s get started!
Required Parts
You can get the HUSKYLENS 2 from DFRobot using the link below. Furthermore, you will need a microcontroller. I am using an Arduino UNO and a Lolin ESP32 Lite, but most other Arduino or ESP32 boards will work fine as well. The only requirement is support for an I2C (or UART) interface.

HUSKYLENS 2

Arduino Uno

USB Cable for Arduino UNO

ESP32 lite

USB Data Cable

Dupont Wire Set

Breadboard
Makerguides is a participant in affiliate advertising programs designed to provide a means for sites to earn advertising fees by linking to Amazon, AliExpress, Elecrow, and other sites. As an Affiliate we may earn from qualifying purchases.
HUSKYLENS versus new HUSKYLENS 2
To avoid any confusion, let’s begin with a quick comparison of the original HUSKYLENS (Version 1) and the new HUSKYLENS 2 that we are using in this tutorial. Both devices are AI vision sensors designed by DFRobot to simplify computer vision applications in embedded systems. Both offer onboard vision processing and serial interfaces, but the HUSKYLENS 2 has improved hardware and software capabilities.
HUSKYLENS
The first-generation HUSKYLENS is based on the Kendryte K210 AI processor and provides built-in AI algorithms for tasks such as face recognition, object tracking, color detection, line tracking, and tag recognition. It includes a 2.0-inch IPS display for real-time feedback and supports UART, I²C, and USB interfaces for communication.
HUSKYLENS 2
The HUSKYLENS 2, on the other hand, is powered by a 1.6 GHz dual-core processor (K230) with a 6 TOPS AI accelerator, 1 GB of LPDDR4 RAM, and 8 GB of onboard storage. This enhanced processing capability allows it to perform more complex AI tasks locally.
It comes with over twenty built-in vision models, including object detection, pose estimation, and instance segmentation, and it allows users to deploy their own custom models using a YOLO-based workflow.
The new version also features a higher-resolution 2.4-inch IPS display, a modular camera system that supports interchangeable lenses, a USB-C port for data and power, and an optional wireless connectivity module.
Comparison Table
| Feature | HUSKYLENS (Original) | HUSKYLENS 2 |
|---|---|---|
| Processor | Kendryte K210 dual-core AI chip | Dual-core 1.6 GHz processor with 6 TOPS AI accelerator |
| Memory / Storage | Not specified | 1 GB LPDDR4 RAM + 8 GB eMMC |
| Built-in Models | 7 predefined algorithms (face, object, line, color, tag recognition) | 20+ built-in models with support for custom YOLO models |
| Display | 2.0″ IPS (320×240 px) | 2.4″ IPS (640×480 px) |
| Camera | Fixed 2 MP (OV2640) | 2 MP (GC2093) with interchangeable lenses |
| Interfaces | UART, I²C, USB | USB-C, I²C, UART, optional Wi-Fi module |
| Power Consumption | 230mA @ 5.0V (Face Recognition) | 340mA @ 5V (Face Recognition) |
As mentioned, in this tutorial we are going to use the HUSKYLENS 2, and in the next section we will have a more detailed look at its technical features.
Hardware of the HUSKYLENS 2
The HUSKYLENS 2 is built around a high-performance embedded AI vision module designed to execute neural network inference entirely on-device, reducing the need for a separate host processor or cloud-based computation. At its core is a dual-core 1.6 GHz CPU (K230) paired with an AI accelerator capable of ≈ 6 TOPS (tera-operations per second) of AI compute performance.
The picture below shows the back of the HUSKYLENS 2 with the camera, two LEDs for illumination, an RGB LED, next to it a microphone, and finally a small push button for programming/learning:

Memory
Complementing the processor is a memory subsystem consisting of 1 GB of LPDDR4 RAM for neural network and application runtime tasks, and 8 GB of eMMC flash storage for system firmware, model storage, and user data.
Camera
The image acquisition chain uses a 2-megapixel sensor (model GC2093, 1/2.9″ format) capable of capturing video at up to 60 frames per second (fps). The camera module is designed to be modular/interchangeable, enabling different lenses or optical configurations (for example macro, night-vision, long-range) to be swapped in depending on the use case.

Touchscreen
For human-machine interaction and local UI, HUSKYLENS 2 integrates a 2.4″ IPS touchscreen (resolution 640×480). A single function button, one RGB indicator LED and a small speaker on the back provide additional audio/visual feedback.

Interfaces
The HUSKYLENS 2 offers a USB-C port (for power and firmware updates), a 4-pin “Gravity” connector exposing UART and I²C (plus power/GND) for host communication, and a slot for an optional 2.4 GHz Wi-Fi 6 module to enable wireless connectivity. Expandability is also supported via a TF-card (micro-SD) slot on the side for additional storage or dataset capture.

The vision subsystem provides coordinate data, bounding-boxes, IDs and model-specific metadata over UART/I²C for external microcontrollers to read and act on.
Power
Power input is nominally 3.3V to 5.0V (regulated on-board) and typical power consumption is around 1.5W to 3W depending on load and active model usage. The table below shows the currents I measured for some of the models, with up to 420mA for OCR (Optical Character Recognition) and an idle current (only the UI running) of 250mA:
| Task | Current |
|---|---|
| UI | 250mA |
| Face Recognition | 340mA |
| Object Recognition | 380mA |
| Object Tracking | 370mA |
| Color Recognition | 330mA |
| Object Classification | 350mA |
| Instance Segmentation | 390mA |
| Hand Recognition | 370mA |
| QR Code Recognition | 410mA |
| OCR | 420mA |
Built-in AI Models
The firmware of the HUSKYLENS 2 manages both the system RTOS and the AI model deployment environment. The device comes preloaded with 20+ built-in AI models (such as object detection, face recognition, pose estimation, and instance segmentation), which can be selected via the onboard UI or programmatically:

Firmware updates are deployed via the USB-C port (or via the host interface) and the system supports multiple models running serially or in parallel (depending on resource usage) thanks to the 6 TOPS accelerator.
Custom-trained Models
In addition to the built-in models, the HUSKYLENS 2 supports the deployment of custom-trained models via a YOLO-style workflow: users can annotate datasets, train models externally, convert them to the target format, load them into the device’s eMMC storage, and run them on-device.
Model Context Protocol
A distinctive feature is the built-in “Model Context Protocol” (MCP) service, which allows the camera module to output structured semantic data (for example: “person A lifting object B”) to a connected large-language-model (LLM) or host application, thereby bridging on-device vision processing with higher-level reasoning.
Technical Specification
The following table summarizes the technical specification of the HUSKYLENS 2:
| Parameter | Specification |
|---|---|
| Processor core | Dual-core CPU @1.6 GHz (Kendryte K230) |
| AI accelerator | ~6 TOPS on-device AI compute |
| RAM | 1 GB LPDDR4 |
| Storage | 8 GB eMMC |
| Image sensor | GC2093, 2 MP, 1/2.9″, up to 60 fps |
| On-board display | 2.4″ IPS touchscreen, 640×480 resolution |
| Interfaces | USB-C (power/data), 4-pin Gravity (UART/I²C/Power/GND), optional WiFi module |
| Expandable storage | TF (micro-SD) card slot |
| Audio I/O | Built-in microphone, 1 W speaker |
| Indicator / UI | 1 function button, 2 LEDs for illumination, 1 RGB LED |
| Modular camera support | Interchangeable lens modules (macro, night-vision, etc) |
| Input voltage | 3.3 V to 5.0 V |
| Typical power consumption | ~1.5 W to 3 W |
| Dimensions | ~70 × 58 × 19 mm |
| Weight | ~90 g |
| Preloaded models | 20+ built-in AI models |
| Custom model support | via YOLO-style workflow |
| Special features | MCP service linking vision to LLMs |
Connecting HUSKYLENS 2 to Arduino UNO
You can communicate with the HUSKYLENS using the UART or the I2C protocol. I2C is faster and allows you to connect multiple devices to the same bus. We are therefore going to use I2C. The Gravity connector of the HUSKYLENS exposes the I2C interface (SDA, SCL) and power supply pins (VCC, GND). See the photo below:

You could connect the HUSKYLENS 2 directly to an Arduino and power it from the 5V output pin of the Arduino, but you should NOT do this!
The maximum current the 5V pin of the Arduino can provide is 500mA, and the HUSKYLENS 2 consumes up to 420mA (OCR model). While this is below the maximum, at such a load the voltage regulator on the Arduino or ESP32 will get very hot over longer running times and may burn out.
The safe option is to use the little power supply adapter board that comes with the HUSKYLENS. It allows you to power the HUSKYLENS from a separate power supply.
Alternatively, you can connect the HUSKYLENS to a USB port and the Arduino to another USB port as shown in the DFRobot Wiki:

However, I prefer to use the power adapter board and the following section shows you how to wire that up.
Wiring Diagram
The wiring diagram below shows you how to connect the HUSKYLENS via the power adapter board to an Arduino UNO:

Start by connecting the adapter board to the HUSKYLENS. Use the white Dual-Plug PH2.0-4P Silicone Cable (gray wires in the diagram) that comes with the HUSKYLENS and make sure that you are using the connectors labeled “Huskylens” and “I2C/UART”:

Next connect the colored Gravity-4P Sensor Connector Cable to the power adapter and the Arduino:

The red wire needs to be connected to the 5V pin of the Arduino, and the black wire to GND. The green wire is SDA and should be connected to A4 and the blue wire (SCL) should be connected to A5 of the Arduino.
Connect a power bank or another 5V power supply via a USB cable to the adapter board. This will provide the power for the HUSKYLENS. The HUSKYLENS should start up once USB power is supplied.

Finally, we need to connect our Arduino via its USB cable to a PC that runs the Arduino IDE to be able to program it:

Installing the HuskylensV2 Library
Before you can run any of the following code examples on the Arduino or ESP32, you will first need to install the DFRobot_HuskylensV2 library. Go to the GitHub repo for the DFRobot_HuskylensV2 library, click on the green Code button and then “Download ZIP” to download the library as a ZIP file:

Next open your Arduino IDE, click on “Sketch” -> “Include Library” -> “Add .ZIP Library …” to add the DFRobot_HuskylensV2 library you just downloaded to the Arduino IDE:

Now we are ready to write some code.
Code Example: I2C Communication with Models
In this first example, we are going to test the I2C communication between the HUSKYLENS and the Arduino UNO and read results from some of the AI models on the device.
Connect your Arduino UNO to a PC running the Arduino IDE. Make sure that the Arduino is recognized on a COM port and that Arduino UNO is selected as board:

Next, create a new sketch and copy and paste the following code into it. This code establishes I2C communication between the Arduino and the HUSKYLENS and prints the detection results of the AI model currently running on the HUSKYLENS:
// (c) www.makerguides.com
#include "DFRobot_HuskylensV2.h"

HuskylensV2 huskylens;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  // Retry until the HUSKYLENS responds on the I2C bus
  while (!huskylens.begin(Wire)) {
    Serial.println(F("Can't init HUSKYLENS!"));
    delay(100);
  }
  Serial.println("running...");
}

void loop() {
  // Wait until the currently active algorithm has results ready
  while (!huskylens.getResult(ALGORITHM_ANY)) {
    delay(100);
  }
  Serial.println("\nRESULTS:");
  // Pop and print all cached detection results
  while (huskylens.available(ALGORITHM_ANY)) {
    Result *r = static_cast<Result *>(huskylens.popCachedResult(ALGORITHM_ANY));
    Serial.print("Name=");
    Serial.print(r->name);
    Serial.print(" ID=");
    Serial.println(r->ID);
  }
  delay(1000);
}
Libraries and Objects
The code first includes the DFRobot_HuskylensV2 library and creates the HuskylensV2 object.
#include "DFRobot_HuskylensV2.h" HuskylensV2 huskylens;
Setup
Next, in the setup function we first initialize the Serial communication (Serial.begin()) and the I2C interface (Wire.begin()). Then we try to establish I2C communication with the HUSKYLENS via huskylens.begin(Wire):
void setup() {
  Serial.begin(115200);
  Wire.begin();
  while (!huskylens.begin(Wire)) {
    Serial.println(F("Can't init HUSKYLENS!"));
    delay(100);
  }
  Serial.println("running...");
}
Should this fail and you see “Can’t init HUSKYLENS!” printed on your Serial Monitor, check the wiring and make sure that the communication protocol of the HUSKYLENS 2 is set to I2C. For the latter, go to “System Settings” -> “Protocol Type” and verify that I2C is selected as shown below:

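If you are still not sure whether the wiring is correct, a generic I2C scanner can help. The following minimal sketch uses only the standard Arduino Wire API (it does not depend on the HUSKYLENS library) and lists the addresses of all devices that respond on the I2C bus; the HUSKYLENS should show up as one of them:
// Minimal I2C scanner sketch to verify that the HUSKYLENS is visible on the bus
#include <Wire.h>

void setup() {
  Serial.begin(115200);
  Wire.begin();
}

void loop() {
  Serial.println("Scanning I2C bus...");
  for (byte addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {  // device acknowledged at this address
      Serial.print("Device found at address 0x");
      Serial.println(addr, HEX);
    }
  }
  delay(5000);  // repeat the scan every 5 seconds
}
If no device is found, re-check the SDA/SCL wiring and the power supply of the HUSKYLENS before trying the example sketch again.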
Loop
In the loop function we first wait until any of the AI algorithms on the HUSKYLENS has results ready. If that is the case, we iterate over all available results and print the name and ID of each result:
void loop() {
  while (!huskylens.getResult(ALGORITHM_ANY)) {
    delay(100);
  }
  Serial.println("\nRESULTS:");
  while (huskylens.available(ALGORITHM_ANY)) {
    Result *r = static_cast<Result *>(huskylens.popCachedResult(ALGORITHM_ANY));
    Serial.print("Name=");
    Serial.print(r->name);
    Serial.print(" ID=");
    Serial.println(r->ID);
  }
  delay(1000);
}
Select AI Algorithm
Before you can see any results printed on the Serial Monitor, you must first select an AI Algorithm (model) on the HUSKYLENS (later on we will do this automatically from the code). For instance, you can select the Object Recognition algorithm:

The HUSKYLENS will then start to detect objects and will report the results to the Arduino if any objects are detected. You should see an output similar to the following on your Serial Monitor:

Note that you can have multiple detections under one RESULTS block, since there can be multiple objects in the picture.
You can try other AI Algorithms, but apart from an ID most will not provide much useful information with this code example. The results depend on the specific AI Algorithm and require algorithm-specific code to print them out. In the next sections you will learn how to retrieve more detailed results.
AI Algorithms
The HUSKYLENS 2 has many built-in AI Algorithms. If you open the Result.h file of the DFRobot_HuskylensV2 library, you will find the following list of constants for the built-in models:
// https://github.com/DFRobot/DFRobot_HuskylensV2/blob/master/Result.h
typedef enum {
  ALGORITHM_ANY = 0,                       // 0
  ALGORITHM_FACE_RECOGNITION = 1,          // 1
  ALGORITHM_OBJECT_TRACKING,               // 2
  ALGORITHM_OBJECT_RECOGNITION,            // 3
  ALGORITHM_LINE_TRACKING,                 // 4
  ALGORITHM_COLOR_RECOGNITION,             // 5
  ALGORITHM_TAG_RECOGNITION,               // 6
  ALGORITHM_SELF_LEARNING_CLASSIFICATION,  // 7
  ALGORITHM_OCR_RECOGNITION,               // 8
  ALGORITHM_LICENSE_RECOGNITION,           // 9
  ALGORITHM_QRCODE_RECOGNITION,            // 10
  ALGORITHM_BARCODE_RECOGNITION,           // 11
  ALGORITHM_EMOTION_RECOGNITION,           // 12
  ALGORITHM_POSE_RECOGNITION,              // 13
  ALGORITHM_HAND_RECOGNITION,              // 14
  ALGORITHM_OBJECT_CLASSIFICATION,         // 15
  ALGORITHM_BLINK_RECOGNITION,             // 16
  ALGORITHM_GAZE_RECOGNITION,              // 17
  ALGORITHM_FACE_ORIENTATION,              // 18
  ALGORITHM_FALLDOWN_RECOGNITION,          // 19
  ALGORITHM_SEGMENT,                       // 20
  ALGORITHM_FACE_ACTION_RECOGNITION,       // 21
  ALGORITHM_CUSTOM0,                       // 22
  ALGORITHM_CUSTOM1,                       // 23
  ALGORITHM_CUSTOM2,                       // 24
  ALGORITHM_BUILTIN_COUNT,                 // 25
  ALGORITHM_CUSTOM_BEGIN = 128,            // 128
} eAlgorithm_t;
In the following, we will use the Object Recognition, Face Recognition, and Emotion Recognition algorithms. Once you have some experience with those, writing code for the others is easy, since any of these constants can be activated from code in the same way, as shown below.
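As a quick preview of the examples that follow, each of these enum constants can be passed to the library’s switchAlgorithm() call to activate the corresponding model from your sketch instead of the touchscreen UI. A minimal fragment (assuming the huskylens object from the previous example):
// Activate QR code recognition from the sketch instead of the touchscreen UI
huskylens.switchAlgorithm(ALGORITHM_QRCODE_RECOGNITION);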
Code Example: Object Recognition
In this section we will retrieve the detection results from the Object Recognition algorithm. We have used Object Recognition before when testing the I2C interface but only retrieved the name and ID of the detected object. The following code retrieves the name of the object, its ID, its center point and bounding box:
// (c) www.makerguides.com
#include "DFRobot_HuskylensV2.h"

#define TASK ALGORITHM_OBJECT_RECOGNITION

HuskylensV2 huskylens;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  while (!huskylens.begin(Wire)) {
    Serial.println(F("Can't init HUSKYLENS!"));
    delay(100);
  }
  // Activate the Object Recognition algorithm on the HUSKYLENS
  huskylens.switchAlgorithm(TASK);
  Serial.println("running...");
}

void loop() {
  static char text[128];
  while (!huskylens.getResult(TASK)) {
    delay(100);
  }
  while (huskylens.available(TASK)) {
    Result *r = huskylens.popCachedResult(TASK);
    // Print name, class ID, center point and bounding box size
    sprintf(text, "%10s (%d) x=%3d y=%3d w=%3d h=%3d",
            r->name.c_str(),
            r->classID,
            r->xCenter,
            r->yCenter,
            r->width,
            r->height);
    Serial.println(text);
  }
  delay(1000);
}
The code is very similar to the previous one, with three important differences. First, we define a constant TASK that specifies the AI Algorithm for which we want to retrieve results.
#define TASK ALGORITHM_OBJECT_RECOGNITION
Second, in the setup function we call huskylens.switchAlgorithm(TASK) to automatically run the AI algorithm we want to use:
huskylens.switchAlgorithm(TASK);
Finally, in the loop function, we no longer cast the return type of huskylens.popCachedResult() but just take the Result type as it is.
Result *r = huskylens.popCachedResult(TASK);
Depending on the AI Algorithm the Result object is filled with different detection data. In case of ALGORITHM_OBJECT_RECOGNITION, we can retrieve the name, classID, center point (xCenter, yCenter) and dimensions of the bounding box (width, height):
Result *r = huskylens.popCachedResult(TASK);
sprintf(text, "%10s (%d) x=%3d y=%3d w=%3d h=%3d",
        r->name.c_str(),
        r->classID,
        r->xCenter,
        r->yCenter,
        r->width,
        r->height);
Serial.println(text);
If you upload and run the code on your Arduino, the HUSKYLENS should automatically activate the Object Recognition algorithm:

and you should see the names and other information of the detected objects printed to the Serial Monitor:

Results and Microprocessor
Note that some of the results depend not only on the AI Algorithm but also on the microcontroller connected to the HUSKYLENS.
For microcontrollers with more memory than the Arduino, for instance the ESP32, you will get more detailed results back for some of the algorithms (see Differences in Data Acquisition). You can see this in the Result.h file of the DFRobot_HuskylensV2 library, which has the following definition:
#if defined(ESP32) || defined(NRF5) || defined(ESP8266)
#define LARGE_MEMORY 1
#endif
This means the ESP32, ESP8266, and NRF5 are recognized as having large memory; Result subclasses with more information, such as FaceResult, are then defined and returned:
#ifdef LARGE_MEMORY
class FaceResult : public Result {
public:
  FaceResult(const void *buf);
public:
  int16_t leye_x;
  int16_t leye_y;
  int16_t reye_x;
  int16_t reye_y;
  int16_t nose_x;
  int16_t nose_y;
  int16_t lmouth_x;
  int16_t lmouth_y;
  int16_t rmouth_x;
  int16_t rmouth_y;
};
#endif
In the next section, we will connect an ESP32 to the HUSKYLENS and retrieve the richer results for the Face Recognition algorithm.
Connecting HUSKYLENS 2 to ESP32
The Lolin ESP32 Lite I am using here has the same maximum output current limitation of 500mA, due to its built-in ME6211 voltage regulator. We therefore again connect the ESP32 and the HUSKYLENS via the power adapter board to avoid overloading the voltage regulator.
Below is the complete wiring diagram. It is essentially the same as the one for the Arduino. However, VCC is connected to the 3.3V output of the ESP32, and SCL and SDA are connected to pins 23 and 19, respectively:

The hardware I2C pins depend on your ESP32 board. Either consult the datasheet for your ESP32 board or have a look at our Find I2C and SPI default pins tutorial to identify the I2C pins for a different board.
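On the ESP32, the Arduino core also lets you pass the I2C pins explicitly to Wire.begin(), so you can match the code to your wiring instead of relying on the board defaults. A minimal fragment, assuming SDA on pin 19 and SCL on pin 23 as in the wiring diagram above:
// ESP32 Arduino core: Wire.begin(SDA, SCL) selects the I2C pins explicitly
#include <Wire.h>

void setup() {
  Serial.begin(115200);
  Wire.begin(19, 23);  // SDA = 19, SCL = 23, as wired above
  // ...continue with huskylens.begin(Wire) as in the examples below
}
On the Arduino UNO this is not necessary (and not supported), since its hardware I2C pins are fixed to A4 (SDA) and A5 (SCL).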
Code Example: Face Recognition
In this code example we are going to retrieve detection results from the Face Recognition algorithm. It returns the coordinates for the left and right eye, the nose and the left and right corners of the mouth. See the white dots in the following picture that show those landmarks:

Note that the following code will only compile for an ESP32, ESP8266, or NRF5 board, but not for an Arduino UNO, which is why we connected an ESP32 to the HUSKYLENS in the previous section.
// (c) www.makerguides.com
#include "DFRobot_HuskylensV2.h"

#define TASK ALGORITHM_FACE_RECOGNITION

HuskylensV2 huskylens;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  while (!huskylens.begin(Wire)) {
    Serial.println(F("Can't init HUSKYLENS!"));
    delay(100);
  }
  // Activate the Face Recognition algorithm on the HUSKYLENS
  huskylens.switchAlgorithm(TASK);
  Serial.println("running...");
}

void loop() {
  static char text[128];
  while (!huskylens.getResult(TASK)) {
    delay(100);
  }
  while (huskylens.available(TASK)) {
    // FaceResult adds the facial landmark coordinates (large-memory boards only)
    FaceResult *r = static_cast<FaceResult *>(huskylens.popCachedResult(TASK));
    sprintf(text, "%3d [%3d %3d %3d %3d %3d %3d %3d %3d %3d %3d]",
            r->classID,
            r->leye_x,
            r->leye_y,
            r->reye_x,
            r->reye_y,
            r->nose_x,
            r->nose_y,
            r->lmouth_x,
            r->lmouth_y,
            r->rmouth_x,
            r->rmouth_y);
    Serial.println(text);
  }
  delay(1000);
}
Constants and Objects
We start by defining a constant TASK for the AI Algorithm, ALGORITHM_FACE_RECOGNITION. Then we create the HuskylensV2 object as usual:
#define TASK ALGORITHM_FACE_RECOGNITION

HuskylensV2 huskylens;
Setup
The setup function also remains as usual. We initialize the Serial communication and then try to connect to the HUSKYLENS. Should this fail and you see “Can’t init HUSKYLENS!” printed on your Serial Monitor, check the wiring!
void setup() {
  Serial.begin(115200);
  Wire.begin();
  while (!huskylens.begin(Wire)) {
    Serial.println(F("Can't init HUSKYLENS!"));
    delay(100);
  }
  huskylens.switchAlgorithm(TASK);
  Serial.println("running...");
}
Otherwise, the ALGORITHM_FACE_RECOGNITION algorithm on the HUSKYLENS is activated via huskylens.switchAlgorithm(TASK) and we are ready to detect faces.
Loop
There is an important change in the loop function, however. We cast the result returned by the huskylens.popCachedResult() function to the FaceResult type. This type contains the eye, nose, and mouth coordinates of the detected face, which we then print out:
FaceResult *r = static_cast<FaceResult *>(huskylens.popCachedResult(TASK));
sprintf(text, "%3d [%3d %3d %3d %3d %3d %3d %3d %3d %3d %3d]",
        r->classID,
        r->leye_x,
        r->leye_y,
        r->reye_x,
        r->reye_y,
        r->nose_x,
        r->nose_y,
        r->lmouth_x,
        r->lmouth_y,
        r->rmouth_x,
        r->rmouth_y);
Serial.println(text);
If you upload this code to your ESP32, you should see the following output on your Serial Monitor when faces are detected:

Note that you can run similar code on the Arduino UNO, but due to the smaller memory of the Arduino you will only get the center point and bounding box back. For example, you could change the loop function as follows:
Result *r = static_cast<Result *>(huskylens.popCachedResult(TASK));
sprintf(text, "%3d [%3d %3d %3d %3d]",
        r->classID,
        r->xCenter,
        r->yCenter,
        r->width,
        r->height);
Serial.println(text);
Code Example: Emotion Traffic Light
In this last example, we will build an Emotion Traffic Light. It uses the Face Emotion Recognition algorithm of the HUSKYLENS to detect emotions like “Anger”, “Neutral”, or “Happiness” in faces, and we use this information to switch on a red, yellow, or green LED.
I am going to use an Arduino here, but an ESP32 would work as well. First, we need to connect the LEDs. The following diagram shows how to connect them to the Arduino UNO:

I connected the red LED to pin 11, the yellow LED to pin 10, and the green LED to pin 9. Don’t forget the 220 Ohm (or similar) resistors to limit the current through the LEDs. The photo below shows the wiring on the breadboard:

Now we are ready to write the code for our Emotion Traffic Light. It adds functions to control the LEDs and extends the loop function to switch on LEDs depending on the result (emotion) returned by the Face Emotion Recognition algorithm:
// (c) www.makerguides.com
#include "DFRobot_HuskylensV2.h"

#define RED_LED 11
#define YELLOW_LED 10
#define GREEN_LED 9
#define TASK ALGORITHM_EMOTION_RECOGNITION

HuskylensV2 huskylens;

void initLEDs() {
  pinMode(RED_LED, OUTPUT);
  pinMode(YELLOW_LED, OUTPUT);
  pinMode(GREEN_LED, OUTPUT);
  switchOffLEDs();
}

void switchOffLEDs() {
  digitalWrite(RED_LED, LOW);
  digitalWrite(YELLOW_LED, LOW);
  digitalWrite(GREEN_LED, LOW);
}

void switchOnLED(int led) {
  digitalWrite(led, HIGH);
}

void setup() {
  Serial.begin(115200);
  initLEDs();
  Wire.begin();
  while (!huskylens.begin(Wire)) {
    Serial.println(F("Can't init HUSKYLENS!"));
    delay(100);
  }
  // Activate the Face Emotion Recognition algorithm
  huskylens.switchAlgorithm(TASK);
  Serial.println("running...");
}

void loop() {
  while (!huskylens.getResult(TASK)) {
    delay(100);
  }
  while (huskylens.available(TASK)) {
    Result *r = static_cast<Result *>(huskylens.popCachedResult(TASK));
    Serial.println(r->name);
    switchOffLEDs();
    // Map the detected emotion to the traffic light LEDs
    if (r->name == "Happiness") {
      switchOnLED(GREEN_LED);
    }
    if (r->name == "Neutral") {
      switchOnLED(YELLOW_LED);
    }
    if (r->name == "Anger") {
      switchOnLED(RED_LED);
    }
  }
  delay(1000);
}
Definitions
We start by defining the pins for the LEDs and the TASK as ALGORITHM_EMOTION_RECOGNITION:
#define RED_LED 11
#define YELLOW_LED 10
#define GREEN_LED 9
#define TASK ALGORITHM_EMOTION_RECOGNITION
LED functions
Next we implement some functions to initialize and control the three LEDs:
void initLEDs() {
  pinMode(RED_LED, OUTPUT);
  pinMode(YELLOW_LED, OUTPUT);
  pinMode(GREEN_LED, OUTPUT);
  switchOffLEDs();
}

void switchOffLEDs() {
  digitalWrite(RED_LED, LOW);
  digitalWrite(YELLOW_LED, LOW);
  digitalWrite(GREEN_LED, LOW);
}

void switchOnLED(int led) {
  digitalWrite(led, HIGH);
}
Setup
In the setup function we initialize serial and I2C communication and the LEDs. Then we connect to the HUSKYLENS via huskylens.begin(Wire) and start the AI algorithm as usual via huskylens.switchAlgorithm(TASK):
void setup() {
  Serial.begin(115200);
  initLEDs();
  Wire.begin();
  while (!huskylens.begin(Wire)) {
    Serial.println(F("Can't init HUSKYLENS!"));
    delay(100);
  }
  huskylens.switchAlgorithm(TASK);
  Serial.println("running...");
}
Loop
Finally, we have the loop function, where we retrieve the emotion detection result and, depending on the detected emotion, switch on the red, yellow, or green LED:
Result *r = static_cast<Result *>(huskylens.popCachedResult(TASK));
Serial.println(r->name);
switchOffLEDs();
if (r->name == "Happiness") {
  switchOnLED(GREEN_LED);
}
if (r->name == "Neutral") {
  switchOnLED(YELLOW_LED);
}
if (r->name == "Anger") {
  switchOnLED(RED_LED);
}
Note that beyond “Happiness”, “Anger”, and “Neutral” there are other emotions such as “Fear”, “Disgust”, “Sad”, and “Surprised”, which the current code does not react to. But you could easily extend it to handle those emotions as well, as sketched below.
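For example, here is a sketch of how you could extend the if-chain in the loop function to blink the red LED when “Fear” is detected (the blink pattern is just one possible choice):
// Possible extension of the if-chain in loop():
// blink the red LED three times when "Fear" is detected
if (r->name == "Fear") {
  for (int i = 0; i < 3; i++) {
    switchOnLED(RED_LED);
    delay(200);
    switchOffLEDs();
    delay(200);
  }
}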
If you run the code on your Arduino, the HUSKYLENS should activate the Face Emotion Recognition algorithm:

and on the Serial Monitor you should see output similar to the following. The corresponding LEDs for the detected emotions should also light up:

And that’s it! The code examples and wiring diagrams above should make it easy for you to get started with the HUSKYLENS 2.
Conclusions
This tutorial showed you how to get started with the HUSKYLENS 2 AI vision sensor. You learned how to connect it to an Arduino or ESP32 and how to retrieve detection results for the various AI algorithms built into the HUSKYLENS 2. I also recommend reading the Tutorial for HUSKYLENS 2 and Arduino Code Programming by DFRobot.
The HUSKYLENS 2 makes it extremely easy to become familiar with various AI applications such as object and face recognition, hand gesture and pose recognition, OCR, and many others. You can easily train or adjust some of the AI algorithms and can even deploy your own custom AI models. For more details, have a look at DFRobot’s Wiki for the HUSKYLENS 2.
The biggest advantage of an AI sensor such as the HUSKYLENS 2 is that you can run AI models locally on the device. There is no need for a Wi-Fi connection to a cloud service with potentially high latencies or connection issues.
The disadvantages are a potentially higher power consumption and a lower accuracy of the models. I measured a current of up to 420mA for the Optical Character Recognition (OCR) model, which seemed to be the model with the highest power consumption.
The accuracy of the models varies. I found the Face Emotion Recognition to work very well, while the Object Recognition model produced many misclassifications. For object recognition tasks, you probably want to use your own custom model with a smaller number of classes, or try the Self-Learning Classifier function.
If you have any questions feel free to leave them in the comment section.
Happy Tinkering 😉
Stefan is a professional software developer and researcher. He has worked in robotics, bioinformatics, image/audio processing and education at Siemens, IBM and Google. He specializes in AI and machine learning and has a keen interest in DIY projects involving Arduino and 3D printing.

