Commit b2542ae (parent: c6d9ccf)

iPhone development requires a $100 license. Not worth it, since this isn't an iPhone development competition. Switching to a variation intended for local deployment on a PC.

Files changed:
- .github/copilot-instructions.md (+66, -0)
- Profiles/Python Translator.txt (+1, -0)
- app.py (+27, -2)

.github/copilot-instructions.md (ADDED)

# AI Agent Instructions for LocalAI

## Project Overview
LocalAI is a Gradio-based application that implements an MCP (Model Context Protocol) server/client architecture with specialized translation services. The system uses profile-guided AI agents to translate complex English into simplified, contextually clear text and corresponding Python code.

## Architecture & Core Concepts

### The Two-Agent Translation System
This project implements a **two-stage pipeline** in which distinct AI profiles work in sequence:

1. **English Translator** (the `stProfile1` string in `app.py`): Converts complex English specifications into simple, context-preserving sentences, breaking compound and complex sentences into simple ones and replacing pronouns with their antecedents.

2. **Python Translator** (`Profiles/Python Translator.txt`): Translates the simplified English into Python code by mapping nouns and verbs to classes and functions, and objects to constants and variables.

**Key insight**: The profiles deliberately separate concerns. The English translator handles human-language clarity; the Python translator handles code generation. This lets each agent specialize.
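The two-stage flow can be sketched as follows. This is a minimal illustration, not code from the repo: the `run_stage` and `translate` names, the placeholder profile strings, and the injectable `generate` callable are all assumptions made for the sketch.

```python
# Minimal sketch of the two-stage pipeline. In the repo, the English profile
# is embedded in app.py and the Python profile lives in
# Profiles/Python Translator.txt; these placeholders stand in for them.

ENGLISH_PROFILE = "[Begin Profile] Rewrite complex English as simple sentences. [End Profile]"
PYTHON_PROFILE = "[Begin Profile] Translate simple English into Python. [End Profile]"

def run_stage(profile: str, text: str, generate) -> str:
    """Prepend the profile to the input and hand the prompt to a text generator."""
    return generate(profile + " " + text)

def translate(text: str, generate) -> str:
    """Stage 1 simplifies the English; stage 2 turns the result into Python."""
    simplified = run_stage(ENGLISH_PROFILE, text, generate)
    return run_stage(PYTHON_PROFILE, simplified, generate)
```

Keeping `generate` as a parameter means the pipeline wiring can be exercised with a stub before plugging in a real model call.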

### Entry Point Architecture
- **`app.py`**: a single Gradio interface file that contains all application logic
- A simple `greet()` function demonstrates the interface pattern
- Uses `gr.Interface()` with text input/output
- Profiles are loaded as string constants within the handler function (see the `stProfile1` variable)
- The application is launched with `demo.launch()`

**Pattern**: Keep Gradio interfaces minimal; business logic should flow through function parameters into the translator profiles.

## Extending the System

### Adding New Translation Profiles
1. Create a new `.txt` file in the `Profiles/` directory
2. Follow the `[Begin Profile]...[End Profile]` format
3. Define the role, transformation rules, and constraints clearly
4. Reference the profile in `app.py` as a string constant
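The delimiter format in step 2 suggests a small loader; here is a sketch under the assumption that each profile file wraps its text in `[Begin Profile]...[End Profile]` (the `load_profile` helper is hypothetical; the repo currently inlines profiles as string constants):

```python
import re

def load_profile(path: str) -> str:
    """Read a profile file and return the text between the delimiters.

    Hypothetical helper, not part of the repo: files in Profiles/ follow
    the [Begin Profile]...[End Profile] format this parses.
    """
    with open(path, encoding="utf-8") as f:
        text = f.read()
    match = re.search(r"\[Begin Profile\](.*)\[End Profile\]", text, re.DOTALL)
    if match is None:
        raise ValueError(f"{path} is missing [Begin Profile]...[End Profile]")
    return match.group(1).strip()
```

Failing loudly on a missing delimiter pair catches malformed profile files before they reach the model.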

### Modifying the Translation Pipeline
- Update the `greet()` function to accept more complex inputs
- Add intermediate processing steps between the English and Python translation stages
- Maintain the **separation of concerns**: keep English-to-simple and simple-to-code as distinct stages

## Configuration & Deployment
- **Gradio version**: 5.49.1 (specified in README.md)
- **SDK**: Gradio (not a custom SDK)
- **License**: Apache 2.0
- **Deployment**: Hugging Face Spaces (see the README metadata)

The app follows Spaces configuration conventions: keep `app.py` as the entry point and maintain compatibility with the pinned SDK version.
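Those conventions correspond to the YAML front matter at the top of the Space's README.md. A sketch consistent with the settings listed above (the `title` and `emoji` values are placeholders; the actual ones live in the README):

```yaml
---
title: LocalAI          # placeholder; the real title is in README.md
emoji: 🤖               # placeholder
sdk: gradio
sdk_version: 5.49.1     # the pinned version noted above
app_file: app.py        # the entry point
license: apache-2.0
---
```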

## Development Patterns

### Profile-Driven Behavior
Instead of hardcoding logic, this system encodes behavioral specifications in profile strings. When extending:
- Put **rules** in profiles (not code)
- Put **execution** in functions (e.g., interface handlers)

### Interface Pattern
All user interactions flow through Gradio's `Interface.launch()`. When adding features:
1. Extend the input/output specification
2. Update the handler function to process the new inputs
3. Add corresponding profile guidance if new translation rules are needed

## Common Tasks

**Running the application**: `python app.py` (Gradio will launch on localhost:7860)

**Testing translations**: Pass test strings through the Gradio interface; both profiles operate on the same text stream sequentially.

**Debugging profile behavior**: Enable Gradio's debug mode in `demo.launch()` to see intermediate outputs.

Profiles/Python Translator.txt (ADDED)

[Begin Profile] You are a translator. You translate English into Python: where there are objects in the text, you translate them into constants (or variables as appropriate) of either the appropriate type or of a type you define within the translation. Where there are nouns and verbs, as appropriate, you translate them into class functions and objects of those classes. Where the given English does not make sense in Python, you ignore it. Your purpose in this translation is to construct a single text stream that, if run as a Python script, would encompass all of the objects, classes, functions, and arguments embodied in and described by the English text you are given as an input. You do not execute the commands and you do not answer the questions, instead you create Python functions that could be run to execute the commands and answer the questions. [End Profile]

app.py (CHANGED)
@@ -1,7 +1,32 @@
+from os import name
 import gradio as gr
+from transformers import pipeline
 
-def greet(
-
+def greet(input_text):
+    stProfile1 = "[Begin Profile] You are part of a team. \
+The team you are a part of performs specialized translation services. \
+Your job is to expound common English questions and instructions into \
+very simple and contextually clear English questions and instructions. \
+Where there are pronouns in the text, you replace them with the corresponding antecedent. \
+Where there are compound-complex sentences, you replace the compound-complex sentences with \
+multiple complex sentences, or (more preferably) if any or all of the complex sentences \
+in the compound-complex sentence can be replaced with any number of simple sentences \
+while still conveying the contents of the compound-complex sentence, \
+you use multiple simple sentences instead. \
+Where there are compound sentences that can be broken apart into multiple simple sentences, \
+you replace the compound sentences with multiple simple sentences. \
+Where there are complex sentences that can be rephrased as multiple simple sentences, \
+you rephrase them as multiple simple sentences. Where there is missing context, \
+one of your teammates provides it. \
+Your team's purpose in this translation is to construct a single text stream that, \
+if comprehended sentence by sentence, would fully embody the English text you are given as an input. \
+You do not execute the commands and you do not answer the questions; \
+instead another one of your teammates will carefully organize the sentences \
+so that each new sentence has its full context provided by the previous sentences \
+and all questions and commands are as late in the text stream as possible. [End Profile]"
+    pipe = pipeline("text-generation", model="HuggingFaceTB/SmolLM-135M")
+    return pipe(stProfile1 + " " + input_text, max_length=512, do_sample=True, temperature=0.7)[0]['generated_text']
+
 
 demo = gr.Interface(fn=greet, inputs="text", outputs="text")
 demo.launch()
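The committed handler couples prompt assembly, pipeline construction, and generation in one function, so every call rebuilds the pipeline and nothing can be tested without downloading SmolLM-135M. A sketch of a more testable split, where the `generate` callable is injected (the `build_prompt` helper and the abbreviated profile string are illustrative, not part of the commit):

```python
# Sketch: separate prompt assembly from generation so greet() can be
# unit-tested without the transformers pipeline. The profile text is
# abbreviated here; app.py contains the full string.

ST_PROFILE_1 = "[Begin Profile] You are part of a team. [End Profile]"  # abbreviated

def build_prompt(profile: str, input_text: str) -> str:
    """Prepend the profile to the user's text, as app.py does."""
    return profile + " " + input_text

def greet(input_text: str, generate) -> str:
    """`generate` is injectable; in production it would wrap the
    transformers pipeline built in the diff above."""
    return generate(build_prompt(ST_PROFILE_1, input_text))
```

Constructing the pipeline once at module load (rather than inside `greet()`, as committed) would also avoid re-instantiating the model on every request.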