The HUSKYLENS is a compact, edge-AI vision sensor. Powered by a Kendryte K210 AI chip and equipped with a small 2.0-inch IPS display, the device supports built-in recognition and tracking functions such as face recognition, object recognition and tracking, color and line detection, tag recognition and object classification.
New objects or patterns can easily be learned, and information about detected objects can be transferred via UART or I2C to common boards like the Arduino, ESP32 or Raspberry Pi.
In this tutorial you will learn how to connect the HUSKYLENS to an Arduino UNO and retrieve detection results for line tracking and object classification.
Required Parts
You can get the HUSKYLENS from DFRobot or Amazon using the links below. Furthermore, you will need a microcontroller. I am using an Arduino UNO but other Arduino boards will work fine as well. The only requirement is support for an I2C (or UART) interface.

HUSKYLENS

Arduino Uno

USB Cable for Arduino UNO

Dupont Wire Set

Breadboard
Makerguides is a participant in affiliate advertising programs designed to provide a means for sites to earn advertising fees by linking to Amazon, AliExpress, Elecrow, and other sites. As an Affiliate we may earn from qualifying purchases.
Hardware of the HUSKYLENS
At the core of the HUSKYLENS module is a Kendryte K210 system-on-chip, which integrates a dual-core 64-bit RISC-V CPU complex with dedicated hardware accelerators for convolutional neural networks and signal processing.
This architecture allows the device to run multiple classical and deep-learning–based vision algorithms entirely on the edge, without relying on an external host for inference. The K210 includes several megabytes of on-chip SRAM used as frame buffers and intermediate feature-map storage, which allows real-time processing and keeps external memory bandwidth and latency low.

AI Models
The firmware running on the K210 exposes seven primary machine-vision functions: face recognition, object tracking, object recognition, line tracking, color recognition, tag recognition and object classification.
These functions share a common inference pipeline but are differentiated by their pre-processing, model topology, and post-processing stages. The result is a set of optimized, pre-tuned models that can be switched at runtime via the onboard user interface or via serial commands, without requiring the user to manage or deploy models manually.
Image Sensor and Display Subsystem
HUSKYLENS uses a digital image sensor connected to the K210 through a DVP-style camera interface. Depending on the production batch, the camera module is based on either an OV2640 or a GC0328 sensor, both of which provide a rolling-shutter RGB output suitable for embedded vision tasks. The OV2640 variant offers a native resolution of up to 2 megapixels, while the GC0328 is a lower-resolution, cost-optimized option.

The module integrates a 2.0-inch IPS LCD with a resolution of 320 × 240 pixels. This display is tightly coupled to the vision pipeline and is used to render the live camera feed, overlay detection bounding boxes, IDs, and confidence indicators, as well as to show function names and configuration menus.
The display interface is driven directly by the K210, which allows the device to present an immediate visual explanation of what the vision algorithms are detecting, without any host interaction.
Power Supply and Electrical Characteristics
Electrically, HUSKYLENS is designed to operate from typical 3.3-volt and 5-volt embedded power rails. The specified supply voltage range is 3.3 V to 5.0 V, and both the Gravity 4-pin connector and the micro-USB connector can be used as power inputs. An onboard automatic power-source selection circuit gives priority to the USB input when both are connected.
In typical face-recognition operation with the LCD backlight set to 80% brightness and the fill light disabled, the module draws approximately 320 mA at 3.3 V or 230 mA at 5.0 V. This current consumption is driven by the continuous operation of the K210’s neural network accelerator and the active display.
Communication Interfaces
The HUSKYLENS exposes two communication interfaces for integration with microcontrollers: UART and I2C. These interfaces are routed to the standard DFRobot Gravity 4-pin connector.
In UART mode, the four pins correspond to TX, RX, ground, and VCC, while in I2C mode they correspond to SDA, SCL, ground, and VCC. The following two tables show the pin configuration for the UART and the I2C mode:
| Num | Label | Pin Function | Description |
|---|---|---|---|
| 1 | T | TX | TX pin of HuskyLens |
| 2 | R | RX | RX pin of HuskyLens |
| 3 | – | GND | Negative (0V) |
| 4 | + | VCC | Positive (3.3~5.0V) |
| Num | Label | Pin Function | Description |
|---|---|---|---|
| 1 | T | SDA | Serial data line |
| 2 | R | SCL | Serial clock line |
| 3 | – | GND | Negative (0V) |
| 4 | + | VCC | Positive (3.3~5.0V) |
Both interface modes support the full command protocol for configuring algorithms, triggering learning operations, and reading back detection results such as bounding-box coordinates and IDs.
Onboard User Interface and Controls
The module includes a local user interface implemented with two hardware buttons and the integrated display. The “function” button acts primarily as a mode selector and configuration control. When the user rotates or “dials” this button left or right, the firmware cycles through the available vision functions, updating the screen to indicate the current algorithm.

A long press on the function button opens a second-level parameter menu for the active function. There you can adjust parameters such as sensitivity, recognition thresholds, LED settings, and multi-object learning options without any external controller.
The “learning” button is tightly bound to the internal training routines of each algorithm. A short press instructs the device to learn the object or color currently centered under the on-screen crosshair.
A long press enables continuous learning over multiple frames. This allows the system to build a more robust model over variations in distance, angle, or illumination. When an existing learned target is present, another short press can be used to clear its associated model data.
Connecting HUSKYLENS to Arduino UNO
You can communicate with the HUSKYLENS using the UART or the I2C protocol. I2C is faster and allows you to connect multiple devices to the same bus. We will therefore use I2C.
The Gravity connector of the HUSKYLENS exposes the I2C interface (SDA (T), SCL (R)) and the power supply pins (VCC (+), GND (–)). The photo below shows how to wire the HUSKYLENS to an Arduino UNO:

The red wire needs to be connected to the 5V pin of the Arduino and the black wire to GND. The green wire is SDA and should be connected to A4, and the blue wire is SCL and should be connected to A5 of the Arduino.
Note that you have to set the communications protocol to I2C in the settings of the HUSKYLENS. Go to “General Settings”, then “Protocol Type”, select “I2C” and save the settings as shown below:

If you need more detailed instructions read the HUSKYLENS Wiki.
Installing HUSKYLENS library
To be able to retrieve detection results from the HUSKYLENS we need to install the HUSKYLENS Library. Click on the link to download the library as a “HUSKYLENSArduino-master.zip” file to your computer. Then unpack the ZIP file to extract its contents. You should see the following files in the unpacked folder:

Next we need to copy the “HUSKYLENS” folder into the “libraries” folder of the Arduino IDE. Under Windows the “libraries” folder is typically located under:
C:\Users\<username>\OneDrive\Documents\Arduino\libraries
Since this folder already contains installed libraries, I recommend you temporarily rename it, e.g. to “_libraries”, and create a new folder named “libraries”. This way you avoid conflicts with your already installed libraries and you don’t lose them either. Later you can easily revert this change. The picture below shows what your “Arduino” folder with the libraries should look like:

Now we can copy the “HUSKYLENS” folder into the new “libraries” folder as shown below:

The “libraries” folder can contain other libraries but we don’t need any for this project.
Code Example: Line Tracking
In the first code example we are going to try out the Line Tracking algorithm. It has the advantage that you don’t have to train the HUSKYLENS to detect lines, since that function works without any learning.

The following code connects the Arduino to the HUSKYLENS via I2C and then prints out the start and end points of detected line segments. Have a quick look at the code first, and then we will discuss the details.
#include "HUSKYLENS.h"

HUSKYLENS huskylens;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!huskylens.begin(Wire)) {
    Serial.println("Can't connect!");
  }
  huskylens.writeAlgorithm(ALGORITHM_LINE_TRACKING);
}

void loop() {
  static char text[100];
  huskylens.request();
  while (huskylens.available()) {
    HUSKYLENSResult r = huskylens.read();
    sprintf(text, "xO=%d, yO=%d, xT=%d, yT=%d",
            r.xOrigin, r.yOrigin, r.xTarget, r.yTarget);
    Serial.println(text);
    delay(1000);
  }
}
Imports
We begin by including the HUSKYLENS library using the directive #include "HUSKYLENS.h". This import makes all class definitions, constants and communication routines required to interface with the HUSKYLENS AI camera available to the sketch.
#include "HUSKYLENS.h"
Note that this library currently (Dec 2025) does not compile for the ESP32 core.
Object Instantiation
After the include directive, we create a global instance of the HUSKYLENS class. The object named huskylens holds the internal state of the device interface and exposes the API used throughout the program.
HUSKYLENS huskylens;
Setup
The setup() function initializes the board’s serial interface, the I2C bus and the HUSKYLENS device itself.
The program calls Serial.begin(115200) to configure serial output at 115200 baud. Next I2C communication is activated through Wire.begin(). Then we start communication with the device by calling huskylens.begin(Wire). This method configures the internal driver, sets the I2C address and verifies that the module responds. If initialization fails, the program prints an error message to the serial console.
After initialization, we select which built-in vision algorithm the device should run. In this example, we activate the line tracking algorithm using huskylens.writeAlgorithm(ALGORITHM_LINE_TRACKING).
void setup() {
Serial.begin(115200);
Wire.begin();
if (!huskylens.begin(Wire)) {
Serial.println("Can't connect!");
}
huskylens.writeAlgorithm(ALGORITHM_LINE_TRACKING);
}
The constants for other AI algorithms can be found in the header file of the library and are as follows:
ALGORITHM_FACE_RECOGNITION
ALGORITHM_OBJECT_TRACKING
ALGORITHM_OBJECT_RECOGNITION
ALGORITHM_LINE_TRACKING
ALGORITHM_COLOR_RECOGNITION
ALGORITHM_TAG_RECOGNITION
ALGORITHM_OBJECT_CLASSIFICATION
Loop
In the loop() function we perform continuous data acquisition from the sensor. A static character buffer named text is declared to hold formatted output messages. Declaring it as static ensures that the memory is allocated only once.
The sketch issues a data request to the HUSKYLENS device by calling huskylens.request(). This method prompts the camera to send the most recent set of recognition or tracking results.
After issuing the request, we enter a loop that continues to run as long as the device reports available results. Each iteration retrieves one result object using huskylens.read(). The returned HUSKYLENSResult structure contains geometric information about the detected element. In line tracking mode, these values correspond to the origin (xOrigin, yOrigin) and the target (xTarget, yTarget) of the detected line segment, i.e. its start and end points.
void loop() {
  static char text[100];
  huskylens.request();
  while (huskylens.available()) {
    HUSKYLENSResult r = huskylens.read();
    sprintf(text, "xO=%d, yO=%d, xT=%d, yT=%d",
            r.xOrigin, r.yOrigin, r.xTarget, r.yTarget);
    Serial.println(text);
    delay(1000);
  }
}
We format these four integer values into a human-readable message using sprintf and then print it to the serial monitor. On the Serial Monitor you should see output similar to the screenshot below:

Note that the default line detection is rather brittle: even a clearly visible line is not always detected reliably. You can improve the accuracy by training the line tracking algorithm.
Line Angle
Often we want to know the angle of the line, for instance, to control the driving direction of a robot. Here is a function that allows you to compute that angle:
float calcAngle(const HUSKYLENSResult &r) {
  float dx = (float)r.xTarget - (float)r.xOrigin;
  float dy = (float)r.yTarget - (float)r.yOrigin;
  float angleRad = atan2(dy, dx);
  float angleDeg = angleRad * 180.0 / PI + 90;
  return angleDeg;
}
If the line points straight ahead, the function returns an angle of 0 degrees. If the line points to the left, the angle is negative; otherwise it is positive. You can retrieve and print the angle with the following code:
float angle = calcAngle(r);
Serial.println(angle);
In the next section we perform Object Classification.
Code Example: Object Classification
For this example you first need to let the HUSKYLENS learn two objects. See the instructions in the DFRobot Wiki for Object Classification for details. I used a little Santa figure and a toy car to train my HUSKYLENS:

Once you have learned two objects you can use the following code to run the object detection and print the names of the detected objects to the Serial Monitor:
// (c) www.makerguides.com
#include "HUSKYLENS.h"

const char* names[] = {"", "Santa", "Car"};

HUSKYLENS huskylens;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!huskylens.begin(Wire)) {
    Serial.println("Can't connect!");
  }
  huskylens.writeAlgorithm(ALGORITHM_OBJECT_CLASSIFICATION);
  huskylens.setCustomName(names[1], 1);
  huskylens.setCustomName(names[2], 2);
}

void loop() {
  huskylens.request();
  while (huskylens.available()) {
    HUSKYLENSResult r = huskylens.read();
    Serial.println(names[r.ID]);
    delay(1000);
  }
}
Imports
As before we start by including the HUSKYLENS library:
#include "HUSKYLENS.h"
Constants
Next we define a constant array named names. This array contains human-readable labels that correspond to classification IDs generated by the HUSKYLENS device. The first entry is an empty string because HUSKYLENS classification IDs start at one. Index one maps to the string “Santa” and index two maps to “Car”.
const char* names[] = {"", "Santa", "Car"};
Object Instantiation
Then we create the HUSKYLENS object named huskylens.
HUSKYLENS huskylens;
Setup
In the setup() function we initialize serial communication, the I2C interface and the HUSKYLENS sensor.
The call Serial.begin(115200) configures serial output at 115200 baud, and Wire.begin() initializes the I2C subsystem. The call to huskylens.begin(Wire) configures the device driver and checks whether the sensor acknowledges on the bus. If this initialization fails, the program prints an error message.
After establishing communication, the program selects the object classification algorithm by calling huskylens.writeAlgorithm(ALGORITHM_OBJECT_CLASSIFICATION). Otherwise you would have to manually select the object classification algorithm on the HUSKYLENS itself.
void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!huskylens.begin(Wire)) {
    Serial.println("Can't connect!");
  }
  huskylens.writeAlgorithm(ALGORITHM_OBJECT_CLASSIFICATION);
  huskylens.setCustomName(names[1], 1);
  huskylens.setCustomName(names[2], 2);
}
The next two lines assign custom names to IDs. The call huskylens.setCustomName(names[1], 1) associates the string “Santa” with classification ID 1, and the next line performs the same operation for ID 2, assigning the name “Car”. These custom labels are displayed on the screen of the HUSKYLENS when an object is recognized. See the screenshots below:

Loop
The loop() function handles data retrieval from the classification engine. Each iteration begins by requesting the most recent results from the sensor with the call huskylens.request().
The subsequent while loop processes all results currently available. The method huskylens.available() indicates whether at least one result structure is ready to be read. If a result is present, the sketch retrieves it using huskylens.read(), which returns a HUSKYLENSResult object.
The field r.ID contains the classification ID assigned by the module. The program uses this ID as an index into the names array and prints the associated label to the serial monitor. A one-second delay follows to slow the output, which makes the printed results easier to observe.
void loop() {
  huskylens.request();
  while (huskylens.available()) {
    HUSKYLENSResult r = huskylens.read();
    Serial.println(names[r.ID]);
    delay(1000);
  }
}
The object classifier does not return a confidence score, which means you cannot filter out low-confidence detections. Even if no object is in front of the HUSKYLENS, the ID of the first class is returned; consequently, the background is detected as “Santa” in my case. You will need to train a separate “background” class to get around that.

Also note that the object classifier does not return object location information (x, y, w, h), which is different from the face detection or object tracking algorithms, for instance.
Conclusions
This tutorial showed you how to get started with the HUSKYLENS AI vision sensor. You learned how to connect it to an Arduino and how to retrieve detection results from the built-in AI algorithms. I recommend you also read the HUSKYLENS Wiki by DFRobot for more code examples and instructions.
The HUSKYLENS makes it easy to try out various AI algorithms such as face recognition, object classification, object tracking, line tracking and others. No complex training code is required; however, the detection accuracy and the number of objects that can be detected are limited.
If you need more powerful AI algorithms and the option to deploy your own AI models on the device, have a look at the successor of the HUSKYLENS, the HUSKYLENS 2. It is more expensive and has a higher power consumption, but offers a wider selection of AI algorithms with better accuracy.
Also, the library for communicating with the HUSKYLENS 2 runs on both Arduino and ESP32 platforms, while the HUSKYLENS library runs only on the Arduino.
Both the HUSKYLENS and the HUSKYLENS 2 have the advantage that you can run AI models locally on the device: there is no need for a Wi-Fi connection to a cloud service with potentially high latencies or connection issues.
If you have any questions feel free to leave them in the comment section.
Happy Tinkering 😉
Stefan is a professional software developer and researcher. He has worked in robotics, bioinformatics, image/audio processing and education at Siemens, IBM and Google. He specializes in AI and machine learning and has a keen interest in DIY projects involving Arduino and 3D printing.

