Voice-enabled interaction is increasingly central to modern embedded systems, robotics and smart devices. The Gravity Voice Recognition Module from DFRobot provides a compact, offline-capable module designed for makers and developers working with platforms such as Arduino, micro:bit and ESP32. This module supports both I²C and UART communication, features 121 pre-programmed voice commands and allows up to 17 custom phrases to be trained locally — all without requiring an internet connection.
In this guide, we’ll walk through the key features of the module, prepare the necessary hardware and software setup, and then demonstrate how to integrate the module into a simple voice-controlled project. Whether your goal is a voice-activated light system, a robotic assistant or a voice-triggered automation routine, this post will serve as a practical starting point to get the Gravity Voice Recognition Module up and running under your own control.
Where to Buy
You can get the Gravity Voice Recognition Module at DFRobot or Amazon; the link below is for the product on Amazon. You will also need a microcontroller. I am using an Arduino UNO, but most common microcontrollers (e.g. ESP32, ESP8266, …) will work as well. And if you want to try out some projects, a breadboard and some jumper wires will come in handy.

Gravity Voice Recognition Module

Arduino Uno

USB Cable for Arduino UNO

Dupont Wire Set

Breadboard
Makerguides is a participant in affiliate advertising programs designed to provide a means for sites to earn advertising fees by linking to Amazon, AliExpress, Elecrow, and other sites. As an Affiliate we may earn from qualifying purchases.
Hardware of the Gravity Voice Recognition Module
The Gravity Voice Recognition Module from DFRobot is a compact, self-contained speech processing unit that enables offline voice control for embedded systems. At its core, the module integrates a powerful speech recognition chip capable of performing local command analysis without relying on cloud services. This allows for rapid response, low latency, and complete independence from network connectivity.
Components
The module contains a small onboard speaker and a connector for an external speaker (8 Ω, 3 W). A small switch allows you to choose between the two speakers, and a second switch selects between I²C and UART communication with the module.
Furthermore, there are two microphones and two indicator LEDs on the board. One LED (red) indicates power, and the other (blue) LED signals that the wake-up word was recognized and that the module is ready to receive voice commands. The picture below shows the components of the module:

Versions
Note that there are two versions of the module: version 1.0 and version 1.1. The older version 1.0 lacks the connector for an external speaker and the two mounting holes; otherwise, the hardware and programming are identical. The photo below shows the two module versions:

Power supply
The module operates within a typical voltage range of 3.3V to 5V, ensuring compatibility with both 3.3V logic systems (such as ESP32 and Raspberry Pi Pico) and 5V systems (such as Arduino Uno). The compact form factor (49 mm × 32 mm) makes it suitable for integration into space-constrained enclosures.
Commands
The module has 121 pre-programmed commands, like “Turn the light on”, that cannot be changed. There is also one wake-up word (“Hello Robot”) that is fixed. There are 17 slots for command words that can be learned, and one slot for a learnable wake-up word. For a full list of pre-programmed commands see the DFRobot Wiki for the Gravity Voice Recognition Module.
The module wakes up when the wake-up word is recognized, then stays active for a programmable time and during that time is ready to detect voice commands.
Technical Specifications
| Parameter | Specification |
|---|---|
| Product Name | Gravity Voice Recognition Module |
| Power Supply Voltage | 3.3 V – 5 V |
| Communication Interface | I²C and UART (selectable) |
| I²C Address | 0x64 |
| Recognition Mode | Offline voice recognition |
| Built-in Commands | 121 pre-programmed |
| Custom Commands | Up to 17 user-defined |
| Learning Activation Command | 1 |
| Microphone | 2 x built-in |
| Microphone Sensitivity | −28 dB |
| Speaker | Built in, with optional external speaker |
| Operating Current | ≤ 370 mA @ 5V |
| Operating Temperature | 0 °C to +70 °C |
| Dimensions | 49 mm × 32 mm |
| Port | Gravity 4-pin interface (VCC, GND, SDA/RX, SCL/TX) |
Connecting the Voice Recognition Module to an Arduino UNO
Connecting the Gravity Voice Recognition Module to an Arduino UNO is simple. Connect GND of the Gravity Module to GND of the Arduino and VCC to 5V.
Next, we connect the I²C interface. The SCL line at pin A5 of the Arduino needs to be connected to the C/R pin of the Gravity Module (green wire). Finally, we connect SDA (A4) of the Arduino to the D/T pin of the Gravity Module. The picture below shows the complete wiring:

In the next section we will write some code to try out the Voice Recognition Module.
Install Library for Gravity Voice Recognition Module
Before we can use the Gravity Voice Recognition Module with an Arduino or ESP32 we first need to install the DFRobot_DF2301Q Library in the Arduino IDE. Just open the LIBRARY MANAGER, enter DFRobot_DF2301Q in the search bar and press INSTALL:

Code Example: Turn built-in LED on or off
In this first code example we will switch the built-in LED of the Arduino UNO on or off using the pre-programmed voice commands “Turn on the light” and “Turn off the light”. Since we are using the pre-programmed commands no learning or other preparation of the Voice Recognition Module is required here. Have a quick look at the code first and then we will discuss its details.
#include "DFRobot_DF2301Q.h"

const byte led = LED_BUILTIN;

DFRobot_DF2301Q_I2C asr;

void setup() {
  Serial.begin(115200);
  pinMode(led, OUTPUT);
  digitalWrite(led, LOW);
  while (!(asr.begin())) {
    Serial.println("Can't initialize ASR");
    delay(3000);
  }
  asr.setVolume(6);
  asr.setMuteMode(0);  // 1=Mute
  asr.setWakeTime(20);
}

void loop() {
  uint8_t cmd_id = asr.getCMDID();
  if (cmd_id == 103) {
    digitalWrite(led, HIGH);
    Serial.println("Turn on the light");
  }
  if (cmd_id == 104) {
    digitalWrite(led, LOW);
    Serial.println("Turn off the light");
  }
  if (cmd_id != 0) {
    Serial.print("cmd_id = ");
    Serial.println(cmd_id);
  }
  delay(300);
}
Imports
First, we include the header file for the DF2301Q library:
#include "DFRobot_DF2301Q.h"
This library provides all the necessary functions to initialize, configure, and communicate with the DF2301Q voice recognition module. It handles I²C communication and simplifies command recognition and configuration management.
Constants
In the following line, we define a constant for the LED pin:
const byte led = LED_BUILTIN;
The LED_BUILTIN macro refers to the onboard LED of the development board, which is typically connected to pin 13 on most Arduino boards. By assigning this value to the led constant, the code becomes more readable and easier to adapt if you want to use a different LED pin.
Objects
Next, an instance of the DFRobot_DF2301Q_I2C class is created, which allows the Arduino to communicate with the Gravity Voice Recognition Module via I²C:
DFRobot_DF2301Q_I2C asr;
Setup
The setup() function runs once at the beginning of the program and prepares both the microcontroller and the DF2301Q module.
void setup() {
  Serial.begin(115200);
  pinMode(led, OUTPUT);
  digitalWrite(led, LOW);
  while (!(asr.begin())) {
    Serial.println("Can't initialize ASR");
    delay(3000);
  }
  asr.setVolume(6);
  asr.setMuteMode(0);  // 1=Mute
  asr.setWakeTime(20);
}
First we initialize the serial communication at 115200 baud. This allows the Arduino to send diagnostic information to the Serial Monitor. Next, the LED pin is configured as an output, and its state is set to LOW, ensuring the LED is initially turned off.
The program then attempts to initialize the DF2301Q module using asr.begin(). The function returns true if the module is successfully detected over I2C. If not, the message “Can’t initialize ASR“ is printed every 3 seconds until initialization succeeds.
After initialization, several configuration commands are issued. The asr.setVolume(6) sets the speaker or recognition volume to level 6 (on a scale from 1 to 10).
With the asr.setMuteMode(0) command you can enable or disable audio feedback. A value of 0 means that mute is disabled: the module will confirm recognized commands with “OK” or “Yes, I am here”, and say “I’m off now” when exiting wake mode.
Finally, asr.setWakeTime(20) defines the module’s wake time, or the duration it remains active after detecting a wake-up word. The number seems to be the time in seconds (ranging from 0 to 255) but I could not find specific information in the documentation.
Loop
The loop() function runs continuously after setup and handles the main logic of the program.
void loop() {
  uint8_t cmd_id = asr.getCMDID();
  if (cmd_id == 103) {
    digitalWrite(led, HIGH);
    Serial.println("Turn on the light");
  }
  if (cmd_id == 104) {
    digitalWrite(led, LOW);
    Serial.println("Turn off the light");
  }
  if (cmd_id != 0) {
    Serial.print("cmd_id = ");
    Serial.println(cmd_id);
  }
  delay(300);
}
At the start of each loop iteration, the function asr.getCMDID() retrieves the latest command ID recognized by the DF2301Q module. Each recognized voice command is associated with a unique numeric ID. See the DFRobot Wiki for the Gravity Voice Recognition Module for a list of commands and their IDs.
If the returned command ID equals 103, the LED is turned on by setting the output pin HIGH, and the message “Turn on the light“ is printed to the Serial Monitor.
If the command ID equals 104, the LED is turned off by setting the output pin LOW. And the message “Turn off the light“ is printed.
The following conditional block prints the detected cmd_id value whenever it is nonzero, which helps monitor the commands being received.
Finally, a delay(300) pauses the loop for 300 milliseconds before checking again, giving time for the next command to be recognized.
Running the Code
After uploading the code to your Arduino, you can now control the built-in LED of the Arduino with your voice. Start by saying “Hello Robot” to activate wake mode. The blue LED on the Voice Recognition Module should switch on, and the module will say “How can I help” or “Yes, I am here”.
While in wake mode, you can say “Turn on the light” or “Turn off the light” to switch the built-in LED on or off. If no command is received for a while (the wake time), the module exits wake mode and says “I’m off now”. You will then have to say “Hello Robot” again to wake up the module.
On the Serial Monitor you should see the command ids (2 = wake-up) and the text “Turn on the light“ or “Turn off the light“ printed:
cmd_id = 2
Turn on the light
cmd_id = 103
Turn off the light
cmd_id = 104
Code Example: Control Audio Feedback
In this next example, we still switch the built-in LED on or off but we will control the audio feedback ourselves.
If you call asr.setMuteMode(1), the module is muted and does not provide audio feedback automatically. However, you can call asr.playByCMDID(id) to play certain phrases such as “How can I help / Yes, I am here” or “Done”.
Unfortunately, I could not find any documentation on the supported phrases and their ids. But I identified the following three useful ids:
| id | phrase(s) |
|---|---|
| 1 | How can I help / Yes, I am here |
| 5 | Ok, got it / Ok / doing it / done |
| 23 | Done |
Note that for ids 1 and 5 the module randomly chooses one of the phrases listed in the table.
In the following code example we switch off the automatic audio feedback, and say “Done” when a command was executed, and “How can I help / Yes, I am here”, when the wake-up word was detected:
#include "DFRobot_DF2301Q.h"

const byte led = LED_BUILTIN;

DFRobot_DF2301Q_I2C asr;

void speak(uint8_t id) {
  asr.setMuteMode(0);
  asr.playByCMDID(id);
  delay(100);
  asr.setMuteMode(1);
}

void setup() {
  Serial.begin(115200);
  pinMode(led, OUTPUT);
  digitalWrite(led, LOW);
  while (!(asr.begin())) {
    Serial.println("Can't initialize ASR");
    delay(3000);
  }
  asr.setVolume(6);
  asr.setMuteMode(1);  // 1=Mute
  asr.setWakeTime(20);
}

void loop() {
  uint8_t cmd_id = asr.getCMDID();
  switch (cmd_id) {
    case 2:
      speak(1);  // How can I help / Yes, I am here
      Serial.println("Waking up");
      break;
    case 103:
      digitalWrite(led, HIGH);
      Serial.println("Turn on the light");
      speak(23);  // Done
      break;
    case 104:
      digitalWrite(led, LOW);
      Serial.println("Turn off the light");
      speak(23);  // Done
      break;
    default:
      if (cmd_id != 0) {
        Serial.print("CMDID = ");
        Serial.println(cmd_id);
      }
  }
  delay(300);
}
The code is very similar to the previous example, with the exception of the speak() function. It takes the id of a phrase (e.g. 23 = “Done”), temporarily disables mute, calls playByCMDID(id) to utter the phrase, and then mutes the module again.
void speak(uint8_t id) {
  asr.setMuteMode(0);
  asr.playByCMDID(id);
  delay(100);
  asr.setMuteMode(1);
}
So, if the Voice Recognition Module is a bit too chatty for you, this allows you to control the audio feedback yourself.
Code Example: Control External Devices with Learned Commands
In this final example we will control two external devices (red and green LEDs) using learned commands. Instead of LEDs you could connect relays to switch high-powered devices, but for this example we use LEDs.
First, let us wire up the LEDs. The cathodes of both LEDs are connected to ground (GND). The green LED is connected to digital pin 11 and the red LED to digital pin 12 of the Arduino, each via a 220 Ω resistor. See the wiring diagram below:

Next we teach the command phrases to use for switching the LEDs.
Learning command words
I want to use the following four phrases to control the red and the green LED:
- “Turn on red light“
- “Turn off red light“
- “Turn on green light“
- “Turn off green light“
To start learning, first say the wake-up word “Hello Robot“. Next enter learning mode by saying “Learning command word“. The module will then guide you through the learning process with the following steps:
- Indication: Learning now, be quiet, please learn the command word according to the prompt! Please say the first command to be learned!
- Command phrase to be learned: “Turn on red light“
- Indication: Learning successful, please say it again!
- Command phrase to be learned: “Turn on red light“
- Indication: Learning successful, please say it again!
- Command phrase to be learned: “Turn on red light“
- Indication: OK, learned the first command successfully! Please say the second command to be learned!
- …
You can exit the learning mode by saying “Exit learning“.
Code
If the learning of the four new command phrases was successful, you can then control the red and green LED using the following code:
#include "DFRobot_DF2301Q.h"

const byte redLed = 12;
const byte greenLed = 11;

DFRobot_DF2301Q_I2C asr;

void setup() {
  Serial.begin(115200);
  pinMode(redLed, OUTPUT);
  digitalWrite(redLed, LOW);
  pinMode(greenLed, OUTPUT);
  digitalWrite(greenLed, LOW);
  while (!(asr.begin())) {
    Serial.println("Can't initialize ASR");
    delay(3000);
  }
  asr.setVolume(6);
  asr.setMuteMode(0);  // 1=Mute
  asr.setWakeTime(20);
}

void loop() {
  uint8_t cmd_id = asr.getCMDID();
  switch (cmd_id) {
    case 5:  // Turn on red light
      digitalWrite(redLed, HIGH);
      break;
    case 6:  // Turn off red light
      digitalWrite(redLed, LOW);
      break;
    case 7:  // Turn on green light
      digitalWrite(greenLed, HIGH);
      break;
    case 8:  // Turn off green light
      digitalWrite(greenLed, LOW);
      break;
  }
  if (cmd_id != 0) {
    Serial.print("CMDID = ");
    Serial.println(cmd_id);
  }
  delay(300);
}
Again, the code is similar to the previous examples. The only difference is in the loop() function, where we now check for the command ids 5, 6, 7, and 8 that correspond to the learned commands:
void loop() {
  uint8_t cmd_id = asr.getCMDID();
  switch (cmd_id) {
    case 5:  // Turn on red light
      digitalWrite(redLed, HIGH);
      break;
    case 6:  // Turn off red light
      digitalWrite(redLed, LOW);
      break;
    case 7:  // Turn on green light
      digitalWrite(greenLed, HIGH);
      break;
    case 8:  // Turn off green light
      digitalWrite(greenLed, LOW);
      break;
  }
  if (cmd_id != 0) {
    Serial.print("CMDID = ");
    Serial.println(cmd_id);
  }
  delay(300);
}
If you have already learned other command phrases, or trained the commands in a different order, your ids will be different. However, the code prints out every recognized command id, so you can adjust your code accordingly.
Obviously, you can also delete learned phrases or learn a new wake-up word. For details, see the DFRobot Wiki for the Gravity Voice Recognition Module. The following table provides an overview of the important control phrases for learning, editing, and deleting:
| Command | Function |
|---|---|
| Hello Robot | Default system wake-up word. |
| Learning wake word | Learn a new wake-up word. |
| Learning command word | Learn a new command word. |
| Re-learn | Replace a learned command with another. |
| Exit learning | Exit learning mode. |
| I want to delete | Enter delete mode. |
| Delete wake word | Delete the learned wake-up word. |
| Delete command word | Delete a previously learned command phrase. |
| Delete all | Delete all learned commands and wake-up words. |
| Exit deleting | Exit delete mode. |
And that’s it! Now you have three examples that should help you with the Gravity Voice Recognition Module.
Conclusions
This tutorial provided you with code examples to get started with the Gravity Voice Recognition Module. For a full list of the pre-programmed voice commands and how to train the module see the Wiki.
The Gravity Voice Recognition Module makes it very easy to get started with voice recognition. Just connect the module to a microcontroller, write some minimal code and you are ready to go.
Learning new commands and wake-up words is also done entirely via voice control. That makes it easy to add new commands, but it can be cumbersome to delete or edit them.
Also, while the recognition accuracy of the pre-programmed voice commands is high, the recognition accuracy of learned commands seems to be a good deal lower. I had difficulties getting the system to reliably recognize commands like “Turn on the red light”, which were often mistaken for “Turn on the light”.
An alternative to the Gravity Voice Recognition Module could be the Voice Recognition Module V3. It has no pre-programmed phrases but can learn 80 words and allows programmatic control over the learned words. But it is more difficult to set up and I feel the recognition accuracy is lower.
If you really want high recognition accuracy, you could try voice control with the XIAO-ESP32-S3-Sense and Edge Impulse. In this case you train the voice recognition completely yourself, which might give you better recognition accuracy, but it is also a lot more complex and more work!
If you have any questions feel free to leave them in the comment section.
Happy Tinkering 😉
Stefan is a professional software developer and researcher. He has worked in robotics, bioinformatics, image/audio processing and education at Siemens, IBM and Google. He specializes in AI and machine learning and has a keen interest in DIY projects involving Arduino and 3D printing.

