Welcome to this comprehensive guide on how to create a simple video calling app in React Native using WebRTC and Firebase. In this tutorial, I will take you through the step-by-step process of building your very own real-time video communication platform.
Video calling has become an integral part of modern communication, and with React Native’s cross-platform capabilities and the power of WebRTC for real-time media streaming, you can build a feature-rich app that connects users from different devices seamlessly.
We’ll leverage Firebase, a powerful cloud-based platform, to handle signaling and data storage, allowing us to focus on the core functionality of our app without worrying about complex backend configuration.
Throughout this tutorial, you’ll learn how to set up a React Native development environment, configure Firebase with Cloud Firestore, integrate NativeWind (Tailwind CSS for React Native), establish peer-to-peer connections through WebRTC, enable real-time video streaming, and manage call states effectively.
So, let’s get started on this exciting journey of building your own video calling app using React Native, WebRTC, Firebase, and Tailwind!
What are we going to use?
React Native is an open-source mobile application framework developed by Facebook. It allows developers to build cross-platform mobile apps using JavaScript and React, a popular frontend library. It serves as the foundation for our video calling app. It enables us to create a single codebase that runs on both iOS and Android devices, saving us from the need to develop separate applications for each platform.
WebRTC is a collection of open-source APIs and communication protocols that enable real-time peer-to-peer communication between web browsers and mobile applications. It provides capabilities for audio and video calling, as well as data sharing, without the need for plugins or additional software installations.
Firebase, as a real-time database and cloud-based platform, offers the ideal infrastructure to handle the signaling process in WebRTC. When a user wants to initiate a video call with another user, they need to send signaling messages to establish a connection. Firebase’s real-time database provides a mechanism for sending and receiving these signaling messages in real-time.
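To make the signaling flow concrete before we write any real Firebase code, here is a purely illustrative sketch in plain JavaScript: the `room` object stands in for the shared Firestore document, and the sdp strings are placeholders. The actual Firestore version appears later in the tutorial.

```javascript
// Conceptual sketch only — no real WebRTC or Firebase calls here.
const room = {};

// 1. The caller creates an offer and writes it to the shared room document.
function callerCreatesOffer() {
  room.offer = { type: "offer", sdp: "<caller session description>" };
}

// 2. The callee reads the offer and writes back an answer.
function calleeAnswers() {
  if (!room.offer) throw new Error("no offer to answer yet");
  room.answer = { type: "answer", sdp: "<callee session description>" };
}

callerCreatesOffer();
calleeAnswers();
// Both peers have now exchanged the descriptions they need to connect directly.
```

In the real app, writing to `room` becomes a Firestore `setDoc`/`updateDoc`, and reading becomes an `onSnapshot` listener, so each peer reacts the moment the other writes its half of the exchange.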
Tailwind CSS (via NativeWind) is what we will use to style components instead of StyleSheet. It is a utility-first CSS framework that provides a large set of pre-built utility classes you can apply directly to React Native elements. This brings the efficient styling workflow Tailwind CSS offers for web development to React Native.
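As a rough mental model (illustrative only, not NativeWind's actual implementation), each utility class corresponds to a plain React Native style property, and a `className` string is just those objects merged. The values below follow Tailwind's default scale.

```javascript
// Hypothetical mapping for illustration — a few utility classes and the
// React Native style objects they roughly translate to.
const utilityToStyle = {
  "flex-1": { flex: 1 },
  "justify-center": { justifyContent: "center" },
  "p-2": { padding: 8 },          // Tailwind spacing step 2 ≈ 8
  "rounded-md": { borderRadius: 6 },
};

// className="flex-1 justify-center" resolves to the merged style object:
const resolved = Object.assign(
  {},
  utilityToStyle["flex-1"],
  utilityToStyle["justify-center"]
);
console.log(resolved); // { flex: 1, justifyContent: 'center' }
```

So `<View className="flex-1 justify-center">` behaves like `<View style={{ flex: 1, justifyContent: "center" }}>`, without you maintaining a StyleSheet.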
Setting up React Native Expo
In this tutorial, we will be using React Native Expo instead of a bare workflow project. However, please note that the WebRTC module includes native code, which means it will not function properly on the Expo Go app by default. To overcome this, we will create a development build using EAS build. I will guide you through the process at the end of the package installations.
Once you have set up React Native Expo, the first step is to install the necessary packages required for this project, including WebRTC, Firebase, and Tailwind. I will demonstrate how to install these packages as we progress through the subsequent sections of the tutorial.
Setting up WebRTC on React Native
Setting up WebRTC on React Native involves several steps. Below are the instructions to get started with WebRTC in your React Native project:
1. First, install the package with yarn, npm, or npx expo install:
npx expo install react-native-webrtc @config-plugins/react-native-webrtc
2. After installing this npm package, add the config plugin to the plugins array of your app.json or app.config.js:
{
"expo": {
"plugins": ["@config-plugins/react-native-webrtc"]
}
}
3. That’s it! You can use this reference as a guide.
Setting up Firebase on React Native
Next, we will use Firebase as our Signaling server. Follow the steps below to get started:
1. Log in to your Firebase account and create a new project. From the Firebase console, initialize Cloud Firestore and create a database in test mode.
2. Once you have completed this step, go to Project settings > General > Your apps, and register your app. After registering, you will receive the necessary credentials.
3. Install Firebase using npm with the following command:
npm install firebase
4. Now, create a new file named firebase.js and paste your own Firebase SDK configuration there, just like this:
//firebase.js
// Import the functions you need from the SDKs you need
import { initializeApp } from "firebase/app";
// import { getAnalytics } from "firebase/analytics";
// import { getFirestore } from "firebase/firestore";
import { initializeFirestore } from "firebase/firestore";
// TODO: Add SDKs for Firebase products that you want to use
// https://firebase.google.com/docs/web/setup#available-libraries
// Your web app's Firebase configuration
// For Firebase JS SDK v7.20.0 and later, measurementId is optional
const firebaseConfig = {
apiKey: "Insert here your FIREBASE_API_KEY",
authDomain: "Insert here your FIREBASE_AUTH_DOMAIN",
projectId: "Insert here your FIREBASE_PROJECT_ID",
storageBucket: "Insert here your FIREBASE_STORAGE_BUCKET",
messagingSenderId: "Insert here your FIREBASE_MESSAGING_SENDER_ID",
appId: "Insert here your FIREBASE_APP_ID",
measurementId: "Insert here your FIREBASE_MEASUREMENT_ID",
};
// Initialize Firebase
const app = initializeApp(firebaseConfig);
// const analytics = getAnalytics(app);
export const db = initializeFirestore(app, {
experimentalForceLongPolling: true,
});
5. That’s it! You can follow this video from Fireship as a guide.
Setting up Tailwind/Nativewind on React Native
Let’s install dependencies for Nativewind CSS from the documentation.
1. You will need to install nativewind and its peer dependency tailwindcss.
npm install nativewind
npm install --save-dev tailwindcss
2. Run npx tailwindcss init to create a tailwind.config.js file. Add the paths to all of your component files in your tailwind.config.js file.
//tailwind.config.js
module.exports = {
content: ["./App.{js,jsx,ts,tsx}", "./screens/**/*.{js,jsx,ts,tsx}"],
theme: {
extend: {},
},
plugins: [],
}
3. Modify your babel.config.js
//babel.config.js
module.exports = {
plugins: ["nativewind/babel"],
};
4. That’s it! Make sure to include the folders you are going to create inside the content array so you can use NativeWind in your files. For the full documentation, you can use this link directly:
Setting up EAS Build
1. Install EAS CLI as a global npm dependency by running the following command:
npm install -g eas-cli
2. To initialize a development build, you need to install the expo-dev-client library in your project:
npx expo install expo-dev-client
3. Log in to your Expo account with this command:
eas login
4. Initialize EAS Build by running the eas build command to create eas.json.
//eas.json
{
"build": {
"development": {
"developmentClient": true,
"distribution": "internal"
},
"preview": {
"distribution": "internal"
},
"production": {}
}
}
5. Now we’re going to create and install a development build on an Android device. Each platform has specific instructions you’ll have to follow:
eas build --profile development --platform android
6. Go to the Expo website and log in to view your build. Wait for it to finish. Once it’s finished, install it on your Android device and run the following command in the terminal:
npx expo start --dev-client
7. Scan the QR code with your camera, and the app will open using the development build you’ve just installed. These instructions are referenced from this link:
So, these are all the package dependencies I’ve used in this project. You can copy them into your dependencies and run npm install.
//Package.json
"dependencies": {
"@config-plugins/react-native-webrtc": "^6.0.0",
"@react-navigation/native": "^6.1.7",
"expo": "~48.0.9",
"expo-dev-client": "~2.1.6",
"expo-status-bar": "~1.4.4",
"firebase": "^9.18.0",
"nativewind": "^2.0.11",
"postcss": "^8.4.23",
"react": "18.2.0",
"react-native": "0.71.4",
"react-native-vector-icons": "^10.0.0",
"react-native-webrtc": "^106.0.7",
"tailwindcss": "^3.3.2"
},
App.js
This is where the screens are being handled. By default, the first screen to render for the users is the Room Screen.
//App.js
import React, { useState } from "react";
import { Text, SafeAreaView } from "react-native";
import RoomScreen from "./screens/RoomScreen";
import CallScreen from "./screens/CallScreen";
import JoinScreen from "./screens/JoinScreen";
// Just to handle navigation
export default function App() {
const screens = {
ROOM: "JOIN_ROOM",
CALL: "CALL",
JOIN: "JOIN",
};
const [screen, setScreen] = useState(screens.ROOM);
const [roomId, setRoomId] = useState("");
let content;
switch (screen) {
case screens.ROOM:
content = (
<RoomScreen
roomId={roomId}
setRoomId={setRoomId}
screens={screens}
setScreen={setScreen}
/>
);
break;
case screens.CALL:
content = (
<CallScreen roomId={roomId} screens={screens} setScreen={setScreen} />
);
break;
case screens.JOIN:
content = (
<JoinScreen roomId={roomId} screens={screens} setScreen={setScreen} />
);
break;
default:
content = <Text>Wrong Screen</Text>;
}
return (
<SafeAreaView className="flex-1 justify-center ">{content}</SafeAreaView>
);
}
RoomScreen.js
On the room screen, we have created a function called generateRandomId() that generates a random room ID and places it in the input field. However, you can still edit it as needed.
//RoomScreen.js
import React, { useEffect, useState } from "react";
import { Text, View, TextInput, TouchableOpacity, Alert } from "react-native";
import { db } from "../firebase";
import {
addDoc,
collection,
doc,
setDoc,
getDoc,
updateDoc,
onSnapshot,
deleteField,
} from "firebase/firestore";
export default function RoomScreen({ setScreen, screens, setRoomId, roomId }) {
const onCallOrJoin = (screen) => {
if (roomId.length > 0) {
setScreen(screen);
}
};
//generate random room id
useEffect(() => {
const generateRandomId = () => {
const characters = "abcdefghijklmnopqrstuvwxyz";
let result = "";
for (let i = 0; i < 7; i++) {
const randomIndex = Math.floor(Math.random() * characters.length);
result += characters.charAt(randomIndex);
}
return setRoomId(result);
};
generateRandomId();
}, []);
//checks if room is existing
const checkMeeting = async () => {
if (roomId) {
const roomRef = doc(db, "room", roomId);
const roomSnapshot = await getDoc(roomRef);
// console.log(roomSnapshot.data());
if (!roomSnapshot.exists() || roomId === "") {
// console.log(`Room ${roomId} does not exist.`);
Alert.alert("Wait for your instructor to start the meeting.");
return;
} else {
onCallOrJoin(screens.JOIN);
}
} else {
Alert.alert("Provide a valid Room ID.");
}
};
return (
<View>
<Text className="text-2xl font-bold text-center">Enter Room ID:</Text>
<TextInput
className="bg-white border-sky-600 border-2 mx-5 my-3 p-2 rounded-md"
value={roomId}
onChangeText={setRoomId}
/>
<View className="gap-y-3 mx-5 mt-2">
<TouchableOpacity
className="bg-sky-300 p-2 rounded-md"
onPress={() => onCallOrJoin(screens.CALL)}
>
<Text className="color-black text-center text-xl font-bold ">
Start meeting
</Text>
</TouchableOpacity>
<TouchableOpacity
className="bg-sky-300 p-2 rounded-md"
onPress={() => checkMeeting()}
>
<Text className="color-black text-center text-xl font-bold ">
Join meeting
</Text>
</TouchableOpacity>
</View>
</View>
);
}
The checkMeeting function verifies the existence of a meeting room in the Firestore database. It takes a roomId as input and performs asynchronous operations to fetch data from the database. If the roomId is invalid or the room does not exist, it displays an alert message prompting the user to wait for the instructor to start the meeting. Otherwise, it triggers the onCallOrJoin function with screens.JOIN to join the meeting.
CallScreen.js
Once the user clicks the Start Meeting button, it takes them to the Call Screen, which automatically establishes a connection to the database.
//CallScreen.js
import React, { useState, useEffect } from "react";
import { View } from "react-native";
import {
RTCPeerConnection,
RTCView,
mediaDevices,
RTCIceCandidate,
RTCSessionDescription,
MediaStream,
} from "react-native-webrtc";
import { db } from "../firebase";
import {
addDoc,
collection,
doc,
setDoc,
getDoc,
updateDoc,
onSnapshot,
deleteField,
} from "firebase/firestore";
import CallActionBox from "../components/CallActionBox";
const configuration = {
iceServers: [
{
urls: ["stun:stun1.l.google.com:19302", "stun:stun2.l.google.com:19302"],
},
],
iceCandidatePoolSize: 10,
};
export default function CallScreen({ roomId, screens, setScreen }) {
const [localStream, setLocalStream] = useState();
const [remoteStream, setRemoteStream] = useState();
const [cachedLocalPC, setCachedLocalPC] = useState();
const [isMuted, setIsMuted] = useState(false);
const [isOffCam, setIsOffCam] = useState(false);
useEffect(() => {
startLocalStream();
}, []);
useEffect(() => {
if (localStream && roomId) {
startCall(roomId);
}
}, [localStream, roomId]);
//End call button
async function endCall() {
if (cachedLocalPC) {
const senders = cachedLocalPC.getSenders();
senders.forEach((sender) => {
cachedLocalPC.removeTrack(sender);
});
cachedLocalPC.close();
}
const roomRef = doc(db, "room", roomId);
await updateDoc(roomRef, { answer: deleteField() });
setLocalStream();
setRemoteStream(); // set remoteStream to null or empty when callee leaves the call
setCachedLocalPC();
// cleanup
setScreen(screens.ROOM); //go back to room screen
}
//start local webcam on your device
const startLocalStream = async () => {
// isFront will determine if the initial camera should face user or environment
const isFront = true;
const devices = await mediaDevices.enumerateDevices();
const facing = isFront ? "front" : "environment";
const videoSourceId = devices.find(
(device) => device.kind === "videoinput" && device.facing === facing
)?.deviceId; // use the deviceId string, not the whole device object
const facingMode = isFront ? "user" : "environment";
const constraints = {
audio: true,
video: {
mandatory: {
minWidth: 500, // Provide your own width, height and frame rate here
minHeight: 300,
minFrameRate: 30,
},
facingMode,
optional: videoSourceId ? [{ sourceId: videoSourceId }] : [],
},
};
const newStream = await mediaDevices.getUserMedia(constraints);
setLocalStream(newStream);
};
const startCall = async (id) => {
const localPC = new RTCPeerConnection(configuration);
localStream.getTracks().forEach((track) => {
localPC.addTrack(track, localStream);
});
const roomRef = doc(db, "room", id);
const callerCandidatesCollection = collection(roomRef, "callerCandidates");
const calleeCandidatesCollection = collection(roomRef, "calleeCandidates");
localPC.addEventListener("icecandidate", (e) => {
if (!e.candidate) {
console.log("Got final candidate!");
return;
}
addDoc(callerCandidatesCollection, e.candidate.toJSON());
});
localPC.ontrack = (e) => {
const newStream = new MediaStream();
e.streams[0].getTracks().forEach((track) => {
newStream.addTrack(track);
});
setRemoteStream(newStream);
};
const offer = await localPC.createOffer();
await localPC.setLocalDescription(offer);
await setDoc(roomRef, { offer, connected: false }, { merge: true });
// Listen for remote answer
onSnapshot(roomRef, (doc) => {
const data = doc.data();
if (!localPC.currentRemoteDescription && data.answer) {
const rtcSessionDescription = new RTCSessionDescription(data.answer);
localPC.setRemoteDescription(rtcSessionDescription);
} else {
setRemoteStream();
}
});
// when answered, add candidate to peer connection
onSnapshot(calleeCandidatesCollection, (snapshot) => {
snapshot.docChanges().forEach((change) => {
if (change.type === "added") {
let data = change.doc.data();
localPC.addIceCandidate(new RTCIceCandidate(data));
}
});
});
setCachedLocalPC(localPC);
};
const switchCamera = () => {
localStream.getVideoTracks().forEach((track) => track._switchCamera());
};
// Mutes the local's outgoing audio
const toggleMute = () => {
if (!remoteStream) {
return;
}
localStream.getAudioTracks().forEach((track) => {
track.enabled = !track.enabled;
setIsMuted(!track.enabled);
});
};
const toggleCamera = () => {
localStream.getVideoTracks().forEach((track) => {
track.enabled = !track.enabled;
setIsOffCam(!isOffCam);
});
};
return (
<View className="flex-1 bg-red-600">
{!remoteStream && (
<RTCView
className="flex-1"
streamURL={localStream && localStream.toURL()}
objectFit={"cover"}
/>
)}
{remoteStream && (
<>
<RTCView
className="flex-1"
streamURL={remoteStream && remoteStream.toURL()}
objectFit={"cover"}
/>
{!isOffCam && (
<RTCView
className="w-32 h-48 absolute right-6 top-8"
streamURL={localStream && localStream.toURL()}
/>
)}
</>
)}
<View className="absolute bottom-0 w-full">
<CallActionBox
switchCamera={switchCamera}
toggleMute={toggleMute}
toggleCamera={toggleCamera}
endCall={endCall}
/>
</View>
</View>
);
}
First, we need to import functions from the WebRTC and Firebase libraries that we are going to use. Then, we create a variable configuration to store the ICE servers.
ICE servers in WebRTC help devices behind firewalls and NATs to discover each other’s network addresses, enabling efficient peer-to-peer communication by using STUN for direct connections and TURN as a fallback relay when direct communication is not possible.
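For example, the configuration above could be extended with a TURN fallback for networks where a direct connection fails. The TURN URL and credentials below are placeholders — you would replace them with values from a TURN server you run or rent (e.g. a self-hosted coturn instance):

```javascript
// Sketch only: the STUN entries match the tutorial's configuration; the TURN
// entry uses placeholder host and credentials.
const configurationWithTurn = {
  iceServers: [
    {
      urls: ["stun:stun1.l.google.com:19302", "stun:stun2.l.google.com:19302"],
    },
    {
      urls: ["turn:turn.example.com:3478"], // placeholder TURN server
      username: "user",                     // placeholder credential
      credential: "secret",                 // placeholder credential
    },
  ],
  iceCandidatePoolSize: 10,
};
console.log(configurationWithTurn.iceServers.length); // 2
```

With only STUN configured, calls between peers behind symmetric NATs may fail to connect; a TURN relay is the standard fallback for those cases.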
In the first useEffect() call, we invoke the startLocalStream() function to automatically initiate local video streaming on the caller's device.
In the second useEffect() call, we ensure that we have a roomId and that localStream is not empty before starting the call.
The endCall() function performs the following actions:
- It checks if there is a cached local peer connection (cachedLocalPC) established during the call. If a local peer connection exists, it removes all tracks associated with it and then closes the connection.
- Next, it updates the “room” document in the database by setting the “answer” field to be deleted, effectively clearing the answer information from the room.
- The function then resets the localStream and remoteStream, ensuring they are empty or set to null, specifically, for the "callee" when they leave the call.
- After cleaning up resources, the cached local peer connection (cachedLocalPC) is reset.
- Finally, the setScreen() function is called to navigate back to the room screen, ending the call and returning the user to the original state.
The startLocalStream() function is responsible for capturing the local video stream from the user's device camera. Here's a brief explanation of how it works:
- It enumerates the available media devices to find the video source that matches the desired camera facing.
- The function defines constraints for the video stream, such as minimum width, minimum height, and minimum frame rate.
- The getUserMedia function is called with the defined constraints, which prompts the user for camera access permission and returns the local video stream.
- Finally, the obtained video stream is set as the localStream, making it available for further use in the application.
The startCall() function initiates a call by performing the following steps:
- A new local PeerConnection (localPC) is created.
- The local video stream (localStream) is added to the localPC to enable local media sharing during the call.
- The function sets up a connection to the Firestore database to handle call-related data.
- An event listener is added to the localPC to handle ICE candidate events, and any generated candidates are added to the Firestore collection for "callerCandidates."
- The function listens for the ontrack event on the localPC, which indicates that the remote peer has started streaming their media. It then creates a new MediaStream to store the incoming stream and sets it as the remoteStream.
- An offer is created using the createOffer method of localPC, and the local description is set accordingly.
- The offer, along with the “connected” status set to false, is stored in the Firestore room document using the setDoc method.
- The function listens for any updates in the Firestore room document (e.g., when the remote peer provides an answer).
- If an answer is received from the remote peer, it is processed, and the remote description is set on the localPC to establish the connection between the peers.
- Additionally, any ICE candidates added by the callee are retrieved from the Firestore collection for “calleeCandidates” and added to the localPC.
- Finally, the localPC is stored in a cached state variable using setCachedLocalPC.
The switchCamera function allows users to toggle between the front and back cameras during a video call.
The toggleMute function allows users to mute and unmute their audio during a video call.
The toggleCamera function allows users to turn their camera on and off during a video call.
JoinScreen.js
If the user clicks the Join Meeting button, it will automatically connect them to the caller’s meeting if they have the same room ID. If the room ID the callee tries to join does not exist, a prompt will notify them that the room ID they entered does not exist.
//JoinScreen.js
import React, { useState, useEffect } from "react";
import { Text, StyleSheet, Button, View } from "react-native";
import {
RTCPeerConnection,
RTCView,
mediaDevices,
RTCIceCandidate,
RTCSessionDescription,
MediaStream,
} from "react-native-webrtc";
import { db } from "../firebase";
import {
addDoc,
collection,
doc,
setDoc,
getDoc,
updateDoc,
onSnapshot,
deleteField,
} from "firebase/firestore";
import CallActionBox from "../components/CallActionBox";
const configuration = {
iceServers: [
{
urls: ["stun:stun1.l.google.com:19302", "stun:stun2.l.google.com:19302"],
},
],
iceCandidatePoolSize: 10,
};
export default function JoinScreen({ roomId, screens, setScreen }) {
const [localStream, setLocalStream] = useState();
const [remoteStream, setRemoteStream] = useState();
const [cachedLocalPC, setCachedLocalPC] = useState();
const [isMuted, setIsMuted] = useState(false);
const [isOffCam, setIsOffCam] = useState(false);
//Automatically start stream
useEffect(() => {
startLocalStream();
}, []);
useEffect(() => {
if (localStream) {
joinCall(roomId);
}
}, [localStream]);
//End call button
async function endCall() {
if (cachedLocalPC) {
const senders = cachedLocalPC.getSenders();
senders.forEach((sender) => {
cachedLocalPC.removeTrack(sender);
});
cachedLocalPC.close();
}
const roomRef = doc(db, "room", roomId);
await updateDoc(roomRef, { answer: deleteField(), connected: false });
setLocalStream();
setRemoteStream(); // set remoteStream to null or empty when callee leaves the call
setCachedLocalPC();
// cleanup
setScreen(screens.ROOM); //go back to room screen
}
//start local webcam on your device
const startLocalStream = async () => {
// isFront will determine if the initial camera should face user or environment
const isFront = true;
const devices = await mediaDevices.enumerateDevices();
const facing = isFront ? "front" : "environment";
const videoSourceId = devices.find(
(device) => device.kind === "videoinput" && device.facing === facing
)?.deviceId; // use the deviceId string, not the whole device object
const facingMode = isFront ? "user" : "environment";
const constraints = {
audio: true,
video: {
mandatory: {
minWidth: 500, // Provide your own width, height and frame rate here
minHeight: 300,
minFrameRate: 30,
},
facingMode,
optional: videoSourceId ? [{ sourceId: videoSourceId }] : [],
},
};
const newStream = await mediaDevices.getUserMedia(constraints);
setLocalStream(newStream);
};
//join call function
const joinCall = async (id) => {
const roomRef = doc(db, "room", id);
const roomSnapshot = await getDoc(roomRef);
if (!roomSnapshot.exists()) return; // exists is a method in the modular Firestore SDK
const localPC = new RTCPeerConnection(configuration);
localStream.getTracks().forEach((track) => {
localPC.addTrack(track, localStream);
});
const callerCandidatesCollection = collection(roomRef, "callerCandidates");
const calleeCandidatesCollection = collection(roomRef, "calleeCandidates");
localPC.addEventListener("icecandidate", (e) => {
if (!e.candidate) {
console.log("Got final candidate!");
return;
}
addDoc(calleeCandidatesCollection, e.candidate.toJSON());
});
localPC.ontrack = (e) => {
const newStream = new MediaStream();
e.streams[0].getTracks().forEach((track) => {
newStream.addTrack(track);
});
setRemoteStream(newStream);
};
const offer = roomSnapshot.data().offer;
await localPC.setRemoteDescription(new RTCSessionDescription(offer));
const answer = await localPC.createAnswer();
await localPC.setLocalDescription(answer);
await updateDoc(roomRef, { answer, connected: true }); // updateDoc takes no merge option; it already merges top-level fields
onSnapshot(callerCandidatesCollection, (snapshot) => {
snapshot.docChanges().forEach((change) => {
if (change.type === "added") {
let data = change.doc.data();
localPC.addIceCandidate(new RTCIceCandidate(data));
}
});
});
onSnapshot(roomRef, (doc) => {
const data = doc.data();
if (!data.answer) {
setScreen(screens.ROOM);
}
});
setCachedLocalPC(localPC);
};
const switchCamera = () => {
localStream.getVideoTracks().forEach((track) => track._switchCamera());
};
// Mutes the local's outgoing audio
const toggleMute = () => {
if (!remoteStream) {
return;
}
localStream.getAudioTracks().forEach((track) => {
track.enabled = !track.enabled;
setIsMuted(!track.enabled);
});
};
const toggleCamera = () => {
localStream.getVideoTracks().forEach((track) => {
track.enabled = !track.enabled;
setIsOffCam(!isOffCam);
});
};
return (
<View className="flex-1">
<RTCView
className="flex-1"
streamURL={remoteStream && remoteStream.toURL()}
objectFit={"cover"}
/>
{remoteStream && !isOffCam && (
<RTCView
className="w-32 h-48 absolute right-6 top-8"
streamURL={localStream && localStream.toURL()}
/>
)}
<View className="absolute bottom-0 w-full">
<CallActionBox
switchCamera={switchCamera}
toggleMute={toggleMute}
toggleCamera={toggleCamera}
endCall={endCall}
/>
</View>
</View>
);
}
As you can see, joinCall and startCall are largely the same, but they differ in purpose and behavior.
The joinCall function is used for joining an existing call initiated by a caller.
- It checks whether the provided room ID exists in the database (Firestore) using roomSnapshot.exists().
- If the room does not exist, it returns and does not proceed with the connection.
- If the room ID exists, the function establishes a local peer connection (localPC) and adds the local video stream to it.
- It sets up event listeners for ICE candidates to exchange signaling data.
- When ICE candidates are gathered, they are added to the Firestore collection for “calleeCandidates.”
- The function listens for changes in the Firestore room document and processes the remote offer received from the caller.
- After processing the offer, it creates and sets the local answer and updates the Firestore room document to indicate a successful connection.
- The function sets the remote stream received from the caller using the setRemoteStream function.
- The localPC is then cached for further use using the setCachedLocalPC function.
Build APK
The default file format used when building Android apps with EAS Build is an Android App Bundle (AAB/.aab). This format is optimized for distribution to the Google Play Store. However, AABs can’t be installed directly on your device. To install a build directly to your Android device or emulator, you need to build an Android Package (APK/.apk) instead. You can follow the instructions on this link to guide you on how to make it an apk file.
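As a sketch of what the linked instructions describe, setting the Android buildType to apk in one of your eas.json build profiles produces an installable .apk instead of an .aab (the profile name here is just an example):

```json
{
  "build": {
    "preview": {
      "distribution": "internal",
      "android": {
        "buildType": "apk"
      }
    }
  }
}
```

You would then build with eas build --profile preview --platform android and install the resulting .apk directly on your device.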
Video Demo
So that’s it! Congratulations on completing this tutorial on building a real-time video calling app using React Native Expo, WebRTC, Firebase, and Tailwind! You have learned how to leverage cutting-edge technologies to create a seamless and immersive communication experience for your users.
By now, you have gained valuable insights into setting up WebRTC for real-time peer-to-peer communication, integrating Firebase for signaling and data storage, and enhancing your app’s UI with Tailwind.
Thank you for joining me on this learning journey. You can find the whole code on this link. Happy coding and best of luck with your future projects!
Special thanks to Dipansh Khandelwal on Medium for inspiring me to create my own version of this tutorial. You can check out his tutorial by following this link.
Hey! I’m Kyle Mendoza, a Computer Engineer and an aspiring full-stack developer. As of writing this tutorial, I have just graduated from college. I have around 4 years of programming experience with a diverse tech stack, including native Android, React Native, React.js, Firebase, SQL, and even hardware languages like Arduino Sketch (C++). I enjoy working on both frontend and backend development.