
NekoImageGallery


An online AI image search engine based on the CLIP model and the Qdrant vector database. Supports keyword search and similar-image search.

中文文档 (Chinese documentation)

✨ Features

📷Screenshots

(Six screenshots of the gallery interface.)

The above screenshots may contain copyrighted images from different artists; please do not use them for other purposes.

✈️ Deployment

🖥️ Local Deployment

Choose a metadata storage method

In most cases, we recommend using the Qdrant database to store metadata. The Qdrant database provides efficient retrieval performance, flexible scalability, and better data security.

Please deploy the Qdrant database according to the Qdrant documentation. It is recommended to use Docker for deployment.

If you don’t want to deploy Qdrant yourself, you can use the online service provided by Qdrant.
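For a self-hosted instance, Qdrant's own documentation describes a Docker quickstart along these lines; the port and volume path below are Qdrant's documented defaults, so adjust them to your setup:

```shell
# Run Qdrant locally, persisting data to ./qdrant_storage
# (6333 is Qdrant's default REST port)
docker run -p 6333:6333 \
    -v "$(pwd)/qdrant_storage:/qdrant/storage" \
    qdrant/qdrant
```

Once the container is up, point NekoImageGallery's Qdrant settings at the host and port you chose here.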

Local File Storage

Local file storage stores image metadata (including feature vectors, etc.) directly in a local SQLite database. It is recommended only for small-scale or development deployments.

Local file storage does not require an additional database deployment process, but has the following disadvantages:

Deploy NekoImageGallery

  1. Clone the project directory to your own PC or server, then check out a specific version tag (such as v1.0.0).
  2. It is highly recommended to install the dependencies required for this project in a Python venv virtual environment. Run the following command:
     python -m venv .venv
     . .venv/bin/activate
    
  3. Install PyTorch. Follow the PyTorch documentation to install the torch version suitable for your system using pip.

    If you want to use CUDA acceleration for inference, be sure to install a CUDA-supported PyTorch version in this step. After installation, you can use torch.cuda.is_available() to confirm whether CUDA is available.

  4. Install other dependencies required for this project:
     pip install -r requirements.txt
    
  5. Modify the project configuration files inside config/. You can edit default.env directly, but it is recommended to create a new file named local.env that overrides the settings in default.env.
  6. Run this application:
     python main.py
    

    You can use --host to specify the IP address you want to bind to (default is 0.0.0.0) and --port to specify the port you want to bind to (default is 8000).
    You can see all available commands and options by running python main.py --help.

  7. (Optional) Deploy the front-end application: NekoImageGallery.App is a simple web front-end application for this project. If you want to deploy it, please refer to its deployment documentation.
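The layered configuration in step 5 (local.env overriding default.env) can be pictured as a "later file wins" merge. Below is a minimal sketch assuming plain KEY=VALUE .env files; the parsing logic is illustrative, and the project's actual loader may differ:

```python
from pathlib import Path


def parse_env(path: Path) -> dict:
    """Parse a simple KEY=VALUE .env file, skipping blanks and comments."""
    result = {}
    if not path.exists():
        return result
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result


def load_config(config_dir: str = "config") -> dict:
    """Merge env files so that local.env overrides default.env."""
    base = Path(config_dir)
    config = parse_env(base / "default.env")
    config.update(parse_env(base / "local.env"))  # later file wins
    return config
```

This keeps your machine-specific overrides out of the version-controlled defaults.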

🐋 Docker Deployment

About docker images

NekoImageGallery’s Docker images are built and released on Docker Hub, including several variants:

| Tags | Description |
|------|-------------|
| `edgeneko/neko-image-gallery:<version>`<br>`edgeneko/neko-image-gallery:<version>-cuda`<br>`edgeneko/neko-image-gallery:<version>-cuda12.1` | Supports GPU inferencing with CUDA 12.1 |
| `edgeneko/neko-image-gallery:<version>-cuda11.8` | Supports GPU inferencing with CUDA 11.8 |
| `edgeneko/neko-image-gallery:<version>-cpu` | Supports CPU inferencing only |

Where <version> is the version number or version alias of NekoImageGallery, as follows:

| Version | Description |
|---------|-------------|
| `latest` | The latest stable version of NekoImageGallery |
| `v*.*.*` / `v*.*` | A specific version number (corresponding to Git tags) |
| `edge` | The latest development version; may contain unstable features and breaking changes |

In each image, we have bundled the necessary dependencies, openai/clip-vit-large-patch14 model weights, bert-base-chinese model weights, and easy-paddle-ocr models to provide a complete, ready-to-use image.

The images use /opt/NekoImageGallery/static as a volume to store image files; mount it to your own volume or directory if local storage is required.

For configuration, we suggest using environment variables to override the default configuration. Secrets (such as API tokens) can be provided via Docker secrets.
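A common way for an application to support Docker secrets is to check for a companion `*_FILE` variable pointing at a mounted secret file before falling back to the plain environment variable. The sketch below shows that convention; the variable names are placeholders, not NekoImageGallery's actual settings:

```python
import os
from pathlib import Path
from typing import Optional


def read_setting(name: str, default: Optional[str] = None) -> Optional[str]:
    """Resolve a setting, preferring a secret file named by NAME_FILE.

    This mirrors the common Docker-secrets convention; the exact
    mechanism NekoImageGallery uses may differ.
    """
    file_path = os.environ.get(f"{name}_FILE")
    if file_path and Path(file_path).exists():
        # Secret files often end with a trailing newline; strip it.
        return Path(file_path).read_text().strip()
    return os.environ.get(name, default)
```

With this pattern, `docker compose` can mount a secret at a path and pass only `APP_TOKEN_FILE` in the environment, keeping the token itself out of `docker inspect` output.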

Prepare nvidia-container-runtime (CUDA users only)

If you want to use CUDA acceleration, you need to install nvidia-container-runtime on your system. Please refer to the official documentation for installation.

Related Document:

  1. https://docs.docker.com/config/containers/resource_constraints/#gpu
  2. https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker
  3. https://nvidia.github.io/nvidia-container-runtime/

Run the server

  1. Download the docker-compose.yml file from the repository.
    # For cuda deployment (default)
    wget https://raw.githubusercontent.com/hv0905/NekoImageGallery/master/docker-compose.yml
    # For CPU-only deployment
    wget https://raw.githubusercontent.com/hv0905/NekoImageGallery/master/docker-compose-cpu.yml && mv docker-compose-cpu.yml docker-compose.yml
    
  2. Modify the docker-compose.yml file as needed
  3. Run the following command to start the server:
     # start in foreground
     docker compose up
     # start in background (detached mode)
     docker compose up -d
    

Upload images to NekoImageGallery

There are several ways to upload images to NekoImageGallery.

📚 API Documentation

The API documentation is provided by FastAPI’s built-in Swagger UI. You can access the API documentation by visiting the /docs or /redoc path of the server.

These projects work with NekoImageGallery :D

NekoImageGallery.App LiteLoaderQQNT-NekoImageGallerySearch nonebot-plugin-nekoimage

📊 Repository Summary


♥ Contributing

There are many ways to contribute to the project: logging bugs, submitting pull requests, reporting issues, and creating suggestions.

Even if you have push access to the repository, you should create personal feature branches when you need them. This keeps the main repository clean and your workflow cruft out of sight.

We’re also interested in your feedback on the future of this project. You can submit suggestions or feature requests through the issue tracker. To make this process more effective, we ask that they include enough information to define them clearly.

Copyright 2023 EdgeNeko

Licensed under the AGPLv3 license.