MyGO is an LLM API that uses context caching backed by a KV database to improve response speed, similar to the caching offered by Gemini and Kimi.
In a typical AI workflow, you might pass the same input tokens to the model over and over again. With the MyGO API's context caching feature, you can request the same text multiple times while the model is only invoked once. MyGO caches the input tokens, measures similarity between requests with cosine similarity, and serves matching subsequent requests from the cache, reducing cost and latency by avoiding repeated processing of identical input data.
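The workflow above can be sketched as a lookup-before-invoke loop. This is a minimal, hypothetical illustration (the `callModel` stub, `Cache` type, and hashing scheme are assumptions, not MyGO's actual internals):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// callModel stands in for the real model invocation; MyGO's actual
// model call is not shown here, so this stub is purely illustrative.
func callModel(prompt string) string {
	return "response for: " + prompt
}

// Cache maps a hash of the input tokens to a previously computed response.
type Cache struct {
	store map[string]string
}

func NewCache() *Cache {
	return &Cache{store: make(map[string]string)}
}

// key hashes the input so identical prompts share one cache entry.
func key(prompt string) string {
	sum := sha256.Sum256([]byte(prompt))
	return hex.EncodeToString(sum[:])
}

// Query consults the cache first and only falls back to the model on a miss.
// The second return value reports whether the response came from the cache.
func Query(c *Cache, prompt string) (string, bool) {
	if resp, ok := c.store[key(prompt)]; ok {
		return resp, true // cache hit: the model is not touched
	}
	resp := callModel(prompt)
	c.store[key(prompt)] = resp
	return resp, false
}

func main() {
	c := NewCache()
	_, hit1 := Query(c, "hello")
	_, hit2 := Query(c, "hello")
	fmt.Println(hit1, hit2) // first call misses, second hits: false true
}
```

A real implementation would also expire entries and, as described above, fall back to a similarity search over cached token vectors rather than requiring an exact hash match.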
Check out Kimi for a practical example of how MyGO's concepts are applied.
Cosine similarity measures the similarity between two vectors by calculating the cosine of the angle between them. The cosine value ranges from -1 to 1, where:

- 1 means the vectors point in the same direction (identical),
- 0 means they are orthogonal (unrelated),
- -1 means they point in opposite directions.
This measure is widely used in positive space, where all values are non-negative, making it particularly useful for comparing textual data in natural language processing tasks.
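The metric is simply the dot product of the two vectors divided by the product of their magnitudes. A minimal sketch in Go (the function name and zero-vector handling are choices made here, not part of MyGO's API):

```go
package main

import (
	"fmt"
	"math"
)

// CosineSimilarity returns the cosine of the angle between two
// equal-length vectors: dot(a, b) / (|a| * |b|).
func CosineSimilarity(a, b []float64) float64 {
	var dot, normA, normB float64
	for i := range a {
		dot += a[i] * b[i]
		normA += a[i] * a[i]
		normB += b[i] * b[i]
	}
	if normA == 0 || normB == 0 {
		return 0 // similarity to a zero vector is undefined; treat as 0
	}
	return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}

func main() {
	fmt.Println(CosineSimilarity([]float64{1, 0}, []float64{1, 0}))  // 1: same direction
	fmt.Println(CosineSimilarity([]float64{1, 0}, []float64{0, 1}))  // 0: orthogonal
	fmt.Println(CosineSimilarity([]float64{1, 0}, []float64{-1, 0})) // -1: opposite
}
```

Note that for the non-negative vectors typical of text representations, the result stays in [0, 1], which is why the measure works well in positive space.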
To get started with MyGO, follow these steps:

```shell
git clone https://github.com/Chihaya-Yuka/mygo.git
cd mygo
go run main.go
```
For more detailed instructions, see the Documentation section.
If you want to build MyGO from source, follow these steps:

```shell
git clone https://github.com/Chihaya-Yuka/mygo.git
cd mygo
go build -o mygo main.go
./mygo
```
This will start the MyGO service on your local machine.
To launch the MyGO service from the source, simply run:

```shell
go run main.go
```
This will start the service, which you can then interact with via HTTP requests.
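A client request might be constructed as follows. The endpoint path (`/v1/completions`), port (`8080`), and payload field (`prompt`) are illustrative assumptions only; consult the MyGO documentation for the actual API:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildRequest constructs a JSON POST request for a locally running service.
// The URL and payload shape are hypothetical, not MyGO's documented API.
func buildRequest(prompt string) (*http.Request, error) {
	body, err := json.Marshal(map[string]string{"prompt": prompt})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:8080/v1/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildRequest("hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	// Send with http.DefaultClient.Do(req) once the service is running.
}
```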
For more detailed documentation, including API references and advanced configuration options, visit the MyGO Documentation.
Join our community to share your experiences, ask questions, and collaborate with others:
We welcome contributions! If you'd like to contribute to MyGO, please:

1. Fork the repository.
2. Create a feature branch (`git checkout -b feature-branch`).
3. Commit your changes (`git commit -m 'Add new feature'`).
4. Push the branch (`git push origin feature-branch`).
5. Open a Pull Request.