Build fully custom AI applications using top machine learning libraries such as PyTorch, TensorFlow, Hugging Face, OpenAI, etc. This tool generates all the frontend and backend code for your AI. Follow the installation instructions to host the AI live on the web.
Congrats on generating the LLM code! The next step is to deploy this LLM on the web. Follow the instructions below:
The code generated for local LLMs is mostly HTML and Python. Therefore, to deploy it on the web, we will use a browser-based Python framework called PyScript. This guide will walk you through the process.
First, create a free account on pyscript.com and start a new project.
The project comes with three files: index.html, main.py, and pyscript.toml.
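For orientation: index.html is the page the browser loads, main.py holds your Python code, and pyscript.toml holds configuration. A typical index.html wires them together roughly like this (the exact boilerplate pyscript.com generates, and the runtime version shown, may differ):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- PyScript runtime; the release version here is illustrative -->
    <link rel="stylesheet" href="https://pyscript.net/releases/2024.1.1/core.css">
    <script type="module" src="https://pyscript.net/releases/2024.1.1/core.js"></script>
  </head>
  <body>
    <!-- Run main.py in the browser, reading settings from pyscript.toml -->
    <script type="py" src="./main.py" config="./pyscript.toml"></script>
  </body>
</html>
```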
Every AI/LLM system will require you to install several libraries, also known as packages, into the environment. The package information is given at the end of the code base generated by Sttabot. For example, if your LLM requires the following packages to be installed:
pip install torch flask
you will need to open the pyscript.toml file and add the following line there:
packages = ["flask", "torch"]
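Put together, a complete pyscript.toml might look like the sketch below (the project name is an illustrative placeholder). Note that PyScript installs packages into an in-browser Pyodide environment, so a package must be available in Pyodide or as a pure-Python wheel to load successfully.

```toml
# pyscript.toml: packages listed here are installed into the
# browser-side Python environment when the page loads.
name = "my-llm-app"   # illustrative project name

packages = ["flask", "torch"]
```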
Finally, copy the Python (backend) code generated by Sttabot and paste it into the main.py file.
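As a quick sanity check before pasting in the full generated code, you can confirm the setup runs with a toy stand-in for main.py. The `respond` function below is a placeholder, not Sttabot output:

```python
# Minimal stand-in for main.py: a toy backend that echoes the prompt.
# Replace respond() with the Python code generated by Sttabot.
def respond(prompt: str) -> str:
    """Return the chatbot's reply for a user prompt (toy version)."""
    return f"Echo: {prompt.strip()}"

print(respond("  Hello, world!  "))  # → Echo: Hello, world!
```

If this prints in the page, the runtime and configuration are wired up correctly and you can swap in the real backend code.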
Now click the Save & Run button and your chatbot is live. Copy the URL and share it with anyone.