- Install PDM (only if not already installed):

  ```sh
  pip install pdm
  ```
- Install the package locally (this downloads all dev dependencies and creates a local venv):

  ```sh
  # Optional if you want to create the venv with a specific Python version.
  # More info: https://pdm.fming.dev/latest/usage/venv/#create-a-virtualenv-yourself
  pdm venv create 3.10  # tested version

  pdm install
  ```
- There is a known problem with kaleido; if you need it, you may have to install it manually with pip.
- To see the workflow, use the notebooks in `workflow_raster`. They also contain a less memory-consuming training procedure.
- Specify configs in `config.py` (dirs inside the container, base model, batch size, number of epochs, `n_columns` used to generate the prompt, dataset); see the sketch after this list.
- The dataset can either be located in `data/tiles` (specify `data/tiles` in the config) or be a Hugging Face dataset (like `mprzymus/osm_tiles_small`, which contains tiles from Wrocław).
- Tiles can be downloaded from https://drive.google.com/drive/u/0/folders/1rIsECcHQNBj60905VhGzPUkeQuaBKgmI
- You can specify more training arguments in `run.sh` according to https://huggingface.co/docs/diffusers/training/text2image
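
A rough sketch of the kind of values `config.py` holds (the variable names below are illustrative assumptions, not the actual fields; check `config.py` in the repository for the real names):

```python
# config.py -- illustrative sketch, field names are assumptions

# Directories inside the container
DATA_DIR = "/app/data"            # where the tiles / dataset live
RESULT_DIR = "/app/result_path"   # mounted as ./logs_cont on the host

# Base model to fine-tune (any diffusers text-to-image checkpoint)
BASE_MODEL = "CompVis/stable-diffusion-v1-4"

# Training hyperparameters
BATCH_SIZE = 4
EPOCHS = 10
N_COLUMNS = 3  # number of columns used to build the text prompt

# Dataset: either a local path ("data/tiles") or a Hugging Face dataset id
DATASET = "mprzymus/osm_tiles_small"  # tiles from Wrocław
```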
- To build and run the training container:

  ```sh
  docker build --tag diffusion .
  docker run -it --gpus all -v ./logs_cont:/app/result_path diffusion
  ```
- To monitor logs, run:

  ```sh
  tensorboard --logdir logs_cont/logs
  ```
- After training, the model is located inside the `logs_cont` directory.
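
If training follows the diffusers text-to-image script referenced above, the output can be loaded as a regular diffusers pipeline. A minimal sketch, assuming the model was saved somewhere under `logs_cont` (the exact path below is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

# Adjust this path to the actual model directory produced under logs_cont
model_dir = "logs_cont/<your-model-dir>"

pipe = StableDiffusionPipeline.from_pretrained(model_dir, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate a map tile from a text prompt (the prompt here is just an example)
image = pipe("map tile with a river and dense residential area").images[0]
image.save("generated_tile.png")
```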
A demo app is available! To run it with an existing model, use:

```sh
pdm sync -G prod
streamlit run map_generation/app/app.py
```
You can also use the existing Docker image:

```sh
docker pull mprzymus/diffusion_demo
docker run --gpus 1 -p 6661:5000 mprzymus/diffusion_demo
```