I just tried deploying locally with Docker, following your documentation at Self-Hosting with Docker | TaskingAI. I made no modifications after cloning the repository and then ran `docker-compose -p taskingai up -d`. The build was successful and all 8 containers started; I can also see the 8 containers running in the Docker client.
The frontend was up, but an error was reported when I tried to log in:
Please offer your help. Thanks!
Btw, I did nothing else besides cloning the repository and running `docker-compose`. That means I didn’t install PostgreSQL on my machine, but I assumed it would be installed automatically when building with Docker. Am I right?
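For reference, this is roughly how I checked whether the bundled database container came up (a sketch; the service name `postgres` is my guess, the actual name is whatever the repo’s docker-compose.yml defines):

```shell
# Sketch: confirm the bundled PostgreSQL service started inside Docker.
# "postgres" as the compose service name is an assumption -- check
# docker-compose.yml for the name the TaskingAI repo actually uses.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose -p taskingai ps                      # state of every service
  docker-compose -p taskingai logs --tail=50 postgres # database startup log
  checked=yes
else
  echo "docker-compose is not on PATH"
  checked=no
fi
```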
Thanks for your kind reply. Somehow it looks like nginx wasn’t installed and started automatically after I built and deployed the project with Docker.
I’m on a Mac that didn’t have nginx installed before the deployment.
What can I do now? Should I manually install nginx and then deploy and run the project again?
Or should I make some changes to this configuration in docker-compose.yml?
I can’t be certain, but we were unable to reproduce the issue on multiple machines. You can try using a different machine or creating a new data-isolated environment for testing.
Additionally, the relevant dependencies are installed inside the Docker containers during the build, so you won’t find an nginx installation on your local machine.
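For example, you can confirm that nginx lives inside its container rather than on your Mac. A sketch (the service name `nginx` is an assumption; check the docker-compose.yml in the repository):

```shell
# Sketch: inspect the nginx that runs inside its Docker container, not the host.
# "nginx" as the compose service name is an assumption.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose -p taskingai exec nginx nginx -v  # version of the containerized nginx
  docker-compose -p taskingai logs --tail=50 nginx # its startup log
  status=checked
else
  echo "docker-compose is not on PATH"
  status=skipped
fi
```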
I tried restarting all the containers in Docker several more times on the same machine, and it finally worked, for reasons unknown.
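In case it helps anyone else: I used the restart button in the Docker client, but I believe the CLI equivalent of tearing down and recreating the whole stack would be roughly this (a sketch, using the project name from the original `up` command):

```shell
# Sketch: recreate the whole compose stack from the CLI.
# A plain "down" keeps data stored in named volumes.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose -p taskingai down                  # stop and remove containers
  docker-compose -p taskingai up -d --force-recreate
  restarted=attempted
else
  echo "docker-compose is not on PATH"
  restarted=skipped
fi
```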
But I didn’t see the gpt-4o model in the OpenAI model list. Why?
I’m glad you solved this problem! If you log into the TaskingAI cloud platform, you’ll find that we’ve already integrated GPT-4o. The community version has not been updated yet, but we expect to release a major update to it within this week, which will include the GPT-4o integration. Stay tuned!