The 3Dify project aims to enable avatar creation using advanced AI and a modular software architecture. Such a modular architecture allows for flexible and scalable development, ensuring easy updates and improvements.
3Dify allows users to create fully animated 3D avatars by uploading a single picture of a face. Using AI, 3Dify scans the face in the input photo, extracts its features, and uses the MakeHuman avatar generation suite to create 3D animated avatars based on the facial features extracted by the 3Dify software modules.
3Dify consists of two web applications.
Example of a generated avatar:
The front end provides users with a gallery of the pictures they have uploaded to generate their avatars. The gallery is presented as a grid of photos so that it feels familiar to users accustomed to the picture galleries on their smartphones. Alongside the gallery, users have a box to upload their pictures via drag-and-drop or by selecting a file from their computer.
Whichever way a user uploads a picture, the platform shows the upload's progress to keep users informed about what is happening and to discourage refreshes or other actions that would only worsen the user experience, even though uploading a picture does not take much time. When the upload is complete, users can access the picture from the gallery; by clicking on it, they can preview the picture and gain access to several features. Below the preview is an action bar with options such as zoom in, zoom out, flip horizontally and vertically, rotate in both directions, and the most important one: the customization option, dedicated to avatar customization and rendering using the Unity-based front end.
The application does not yet support logging in, but it is already designed with the capability to do so, which is why users can already see buttons for logging in and out. This functionality will be enabled in future versions using the MongoDB database.
The WebGL front-end, developed using the Unity game engine, allows users to preview an initial version of their avatar based on the image uploaded to the web application described above.
After initial facial feature inference and avatar generation, the application displays a high-fidelity rendering of the fully animated avatar. This avatar includes a mesh with attached materials and textures, as well as a skeleton for use in applications such as XR and video games.
If the user is not satisfied with the initial results, the application offers extensive customization of facial features, including the head, eyes, nose, hair, and other details, using the panel on the left.
Customization is done by adjusting position and size values using sliders or by selecting from graphical options (eyes, hair, etc.).
By pressing the Build button in the lower left corner, the user initiates the avatar generation pipeline. This process, which takes more than 10 seconds, sends the new face parameters to the backend services to generate a modified version of the avatar.
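To make the round trip concrete, the sketch below shows how a client might send the updated face parameters to the generation backend. The endpoint path, parameter names, and response shape are illustrative assumptions, not 3Dify's actual API.

```ts
// Hypothetical sketch of the Build action's request; the endpoint and
// the parameter shape are assumptions, not 3Dify's actual interface.
interface FaceParameters {
  headWidth: number;
  eyeSize: number;
  noseLength: number;
  hairStyle: string;
}

async function requestAvatarRebuild(params: FaceParameters): Promise<string> {
  const res = await fetch("/api/avatar", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
  if (!res.ok) throw new Error(`Avatar generation failed: ${res.status}`);
  // Assume the backend answers with a URL pointing to the regenerated model.
  const { modelUrl } = await res.json();
  return modelUrl;
}
```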
The application's front-end architecture comprises a web application, a file store, and a NoSQL database.
- **Web application**: built with the Next.js framework, which allows both the front end and the back end to be developed in TypeScript. The front end is designed as a Single-Page Application (SPA) written in React, taking advantage of the abstractions offered by Next.js. Another benefit of Next.js is its API Routes, which make it possible to create a serverless back end that optimizes resource utilization (see the sketch after this list).
- **File store**: persists uploaded photos, generated avatars, and other assets.
- **Database**: the application leverages MongoDB for storing users' information and more.
- **WebGL application**: built with the Unity game engine and the C# language; it lets users customize and preview the avatar and download its 3D model.
- **Avatar generation**: handled by a MakeHuman process running in the background.
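As a rough illustration of the serverless pattern mentioned above, a Next.js API Route for listing a user's photos might look like the following. The route path, database name, collection name, and fields are assumptions for the sake of the example, not the project's actual code.

```ts
// pages/api/photos.ts — hypothetical sketch of a Next.js API Route.
// The connection string, database, and collection names are illustrative
// assumptions, not 3Dify's actual schema.
import type { NextApiRequest, NextApiResponse } from "next";
import { MongoClient } from "mongodb";

const client = new MongoClient(
  process.env.MONGODB_URI ?? "mongodb://localhost:27017"
);

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  if (req.method !== "GET") {
    res.status(405).json({ error: "Method not allowed" });
    return;
  }
  await client.connect();
  // Fetch the stored photo records and return them to the gallery UI.
  const photos = await client.db("3dify").collection("photos").find({}).toArray();
  res.status(200).json(photos);
}
```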
The application consists of seven Docker containers, whose deployment is coordinated by the Docker Compose configuration file.
In order to use the authentication process, it is necessary to obtain a Google Client ID and a Client Secret from a Google Cloud instance and put them inside the Docker Compose file. To retrieve this information, you can follow the guide at the following link.
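As an illustration, the credentials typically end up as environment variables on the web application service. The service and variable names below are assumptions; check the actual docker-compose.yml for the exact keys.

```yaml
# Hypothetical excerpt — the service and variable names are illustrative,
# not necessarily the keys used in 3Dify's docker-compose.yml.
services:
  webapp:
    environment:
      GOOGLE_CLIENT_ID: "<your-client-id>.apps.googleusercontent.com"
      GOOGLE_CLIENT_SECRET: "<your-client-secret>"
```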
Alternatively, if the authentication module is not needed, you can switch to the main-no-auth branch and build the application with the Docker Compose file from there.
To run the application on Windows and macOS systems, it is necessary to first install Docker Desktop from the following link.
If an error mentioning WSL appears when opening Docker Desktop for the first time on a Windows machine, open the command prompt and type:
wsl --update
(DISCLAIMER: on Apple Silicon processors the software may currently experience some slowdown due to the x64-to-ARM translation layer.)
To run the application on Linux systems, it is only necessary to install Docker Engine by following the guide for your distribution at the following link. (ATTENTION: this version currently works only with Docker Engine invoked via the sudo command and is not compatible with Docker Desktop for Linux systems.)
Download the Docker Compose file at https://github.com/isislab-unisa/3dify/blob/main/docker-compose.yml for the version with authentication; otherwise, download the Docker Compose file here.
Launch all the containers required to run the application:
docker compose up -d
Stop all the containers of the application:
docker compose down
If the application is correctly deployed, it is reachable by default at http://localhost:3000/.
Follow the same instructions as specified above.
Get the code from the repo https://github.com/isislab-unisa/3dify/tree/main for the version with authentication; otherwise, get the code from this repo.
Launch all the containers required to run the application in development mode:
docker compose -f dev.docker-compose.yml up -d
Monitor the application while developing to see your changes reflected automatically:
docker compose -f dev.docker-compose.yml watch
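The watch command relies on a develop.watch section in the Compose file. A minimal sketch of what such a configuration looks like follows; the service name and paths are illustrative, so see dev.docker-compose.yml for the project's actual rules.

```yaml
# Hypothetical excerpt — service name and paths are illustrative.
services:
  webapp:
    build: .
    develop:
      watch:
        - action: sync      # mirror source changes into the container
          path: ./app
          target: /app
        - action: rebuild   # rebuild the image when dependencies change
          path: package.json
```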
Stop all the containers of the application:
docker compose -f dev.docker-compose.yml down
If the application is correctly deployed, it is reachable by default at http://localhost:3000/.
/app/components: in this folder you will find the code for the UI elements of the application front end, such as the photo gallery.
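For orientation, a component in this folder might look like the following minimal gallery grid. The component name and props are illustrative, not taken from the project's actual code.

```tsx
// Hypothetical sketch of a gallery component; the name and props are
// illustrative, not taken from 3Dify's actual /app/components code.
type Photo = { id: string; url: string };

export function PhotoGallery({
  photos,
  onSelect,
}: {
  photos: Photo[];
  onSelect: (photo: Photo) => void;
}) {
  // Render the uploaded pictures as a smartphone-style grid; clicking a
  // photo hands it to the caller, e.g. to open the preview and action bar.
  return (
    <div style={{ display: "grid", gridTemplateColumns: "repeat(4, 1fr)", gap: 8 }}>
      {photos.map((photo) => (
        <img
          key={photo.id}
          src={photo.url}
          alt="Uploaded photo"
          onClick={() => onSelect(photo)}
        />
      ))}
    </div>
  );
}
```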
In order to run the test suite, it is necessary to:
conda create -n makehuman && conda activate makehuman
pip install -r requirements.txt
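Once the environment is ready, launch the tests with the suite's runner. Assuming the suite uses pytest (an assumption; check the repository for the actual command):

pytest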