A revolutionary AI system that learns and evolves from the internet daily, becoming smarter with each interaction.
StillMe is a Self-Evolving AI System that continuously learns from the internet, adapts to new information, and evolves through different developmental stages - just like a growing organism. Unlike traditional AI systems that remain static, StillMe gets smarter every day.
- Evolutionary Learning: AI progresses through stages (Infant → Child → Adolescent → Adult)
- Multi-Source Learning: RSS feeds + public API integration
- Real-time Data: Live data from multiple trusted sources with transparency
- Ethical Filtering: Comprehensive ethical content filtering with complete transparency
- Transparent Dashboard: Complete visibility into all learning sources and data
- Interactive Chat: Communicate with your evolving AI assistant
- Every line of code is public - no "black box", no proprietary algorithms
- Every API call is visible - see exactly what AI learns from and when
- Every decision is transparent - from ethical filtering to quality assessment
- Complete audit trail - full history of all learning decisions and violations
We believe that ethics isn't a feature - it's the foundation. StillMe is built with unwavering principles:
- Safety First: Harmful content filtered at the source
- Cultural Fairness: Respects global diversity and perspectives
- Full Accountability: Every mistake is public and corrected
- Community Control: You decide what's acceptable, not corporations
"We challenge the AI community to choose: Support transparency and ethics, or remain silent and admit they don't care."
- No personal data collection - learns only from public sources
- Self-hosted codebase - you maintain complete control over your data
- Delete anytime - your data, your rules, your control
StillMe features the world's first completely transparent ethical filtering system:
- Complete Visibility: All ethical violations are logged and visible
- Open Source: Filtering rules and algorithms are publicly available
- Community Driven: Blacklist and rules can be managed by the community
- Audit Trail: Full history of all ethical decisions and violations
- Configurable: Ethics level can be adjusted based on community needs
This transparency ensures StillMe learns responsibly while maintaining community trust.
# Clone repository
git clone https://github.com/anhmtk/stillme_ai_ipc.git
cd stillme_ai_ipc
# Install dependencies
pip install -r requirements.txt
# Start backend
python start_backend.py
# Start frontend (new terminal)
python start_frontend.py

- Evolution Panel: Real-time AI stage and progress tracking
- Ethical Filter: Complete transparency into ethical decisions
- Learning Analytics: Historical progress with flexible timeline analysis
- Community Controls: Manage ethical rules and blacklist
- Raw Data Access: View actual API responses for verification
StillMe progresses through four distinct developmental stages:

Stage 1: Infant
- Basic pattern recognition
- Simple content categorization
- High safety focus
- Manual approval required

Stage 2: Child
- Improved content understanding
- Basic reasoning capabilities
- Selective auto-approval
- Enhanced safety protocols

Stage 3: Adolescent
- Advanced reasoning
- Context awareness
- Smart auto-approval
- Balanced learning approach

Stage 4: Adult
- Sophisticated understanding
- Complex reasoning
- Autonomous learning
- Expert-level knowledge
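The stage progression above can be sketched as a simple threshold model. The stage names come from this README, but the `learning_points` metric and the threshold values below are illustrative assumptions, not the project's actual formula:

```python
from enum import Enum

class Stage(Enum):
    INFANT = "Infant"
    CHILD = "Child"
    ADOLESCENT = "Adolescent"
    ADULT = "Adult"

# Hypothetical thresholds; the real project may compute stage differently.
STAGE_THRESHOLDS = [
    (0, Stage.INFANT),
    (100, Stage.CHILD),
    (500, Stage.ADOLESCENT),
    (2000, Stage.ADULT),
]

def current_stage(learning_points: int) -> Stage:
    """Return the highest stage whose threshold has been reached."""
    stage = Stage.INFANT
    for threshold, candidate in STAGE_THRESHOLDS:
        if learning_points >= threshold:
            stage = candidate
    return stage

def requires_manual_approval(stage: Stage) -> bool:
    """Infant-stage content always needs human approval (per the list above)."""
    return stage is Stage.INFANT
```

Under these assumed thresholds, `current_stage(750)` lands in the Adolescent stage, where selective auto-approval kicks in.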
StillMe aims to become a fully autonomous learning AI:
- Self-Assessment: Knows what it knows and what it doesn't
- Proactive Learning: Actively seeks new knowledge sources
- Self-Optimization: Adjusts learning process based on effectiveness
- Autonomous Review: Gradually reduces human dependency as trust builds
We open these questions to the community:
- AI Self-Coding? - Should StillMe learn to debug and improve itself?
- Red Team vs Blue Team? - AI attacking and defending itself for enhanced security?
- Multi-Agent Collaboration? - Multiple StillMe instances collaborating on complex problems?
- Cross-Domain Learning? - Expanding from AI to medicine, science, and other fields?
"This isn't our roadmap - it's a community discussion. What direction do you want AI's future to take?"
- Learning Engine: Core evolutionary learning system
- RSS Pipeline: Multi-source content fetching
- Ethical Filter: Comprehensive safety system
- Memory Management: Advanced knowledge storage
- API Integration: Public APIs for diverse content
- Dashboard: Real-time monitoring and control
- Evolution Panel: AI stage visualization
- Ethical Controls: Community management tools
- Analytics: Historical learning data
- Chat Interface: Interactive AI communication
- Learning Sessions: Track AI evolution progress
- Content Proposals: Store learning opportunities
- Memory Items: Advanced knowledge storage
- Ethical Violations: Complete audit trail
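The content-proposal model above could be represented with a plain dataclass. The field names and the auto-approval logic are illustrative assumptions (the 0.8 threshold mirrors the sample `AUTO_APPROVAL_THRESHOLD` configuration elsewhere in this README), not the project's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

AUTO_APPROVAL_THRESHOLD = 0.8  # mirrors the sample configuration value

@dataclass
class ContentProposal:
    """A learning opportunity fetched from an RSS feed or public API."""
    source: str
    title: str
    quality_score: float  # assumed 0.0-1.0 quality scale
    created_at: datetime = field(default_factory=datetime.utcnow)
    approved: bool = False

    def auto_approve(self) -> bool:
        """Approve automatically when quality clears the configured threshold."""
        if self.quality_score >= AUTO_APPROVAL_THRESHOLD:
            self.approved = True
        return self.approved
```

Proposals that fall below the threshold stay pending for manual review via the dashboard.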
StillMe learns from diverse, trusted sources:
- Hacker News, Reddit, GitHub
- TechCrunch, ArXiv, Stack Overflow
- Medium, Academic sources
- News outlets, Subreddits
- NewsAPI, GNews
- Weather, Finance data
- Translation services
- Image understanding APIs
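Fetching from RSS sources like those listed above needs only the standard library. The minimal parser below runs on a tiny inline feed; it is a sketch of the idea, not the project's actual pipeline:

```python
import xml.etree.ElementTree as ET

# A tiny illustrative RSS 2.0 document standing in for a live feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Hacker News</title>
  <item><title>Example story</title><link>https://example.com/1</link></item>
  <item><title>Another story</title><link>https://example.com/2</link></item>
</channel></rss>"""

def parse_rss(xml_text: str) -> list[dict]:
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [
        {"title": item.findtext("title"), "link": item.findtext("link")}
        for item in root.iter("item")
    ]
```

For live feeds, the same function can be fed the body of an HTTP response; every fetched URL would then appear in the transparent audit trail.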
StillMe features a comprehensive ethical content filtering system that ensures responsible AI learning:
- Beneficence: Content must benefit learning and users
- Non-Maleficence: Blocks harmful, toxic, or dangerous content
- Autonomy: Protects privacy and personal information
- Justice: Prevents biased or discriminatory content
- Transparency: Complete visibility into all filtering decisions
- Accountability: Full audit trail of ethical violations
- Input Filtering: Blocks harmful content at the source (RSS/API)
- Content Analysis: Detects toxicity, bias, and sensitive topics
- PII Protection: Automatically identifies and blocks personal information
- Source Validation: Flags unreliable or suspicious sources
- Real-time Monitoring: Continuous ethical compliance checking
- Violation Logging: Complete history of all ethical violations
- Dashboard Integration: Real-time ethical metrics and statistics
- Community Management: Blacklist keywords and rules can be managed
- Audit Trail: Full transparency into all ethical decisions
- API Access: Programmatic access to ethical statistics and controls
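The filtering layers above can be sketched as a single check that returns a list of violation labels. The blacklist keywords and the email-based PII pattern below are illustrative assumptions; the real system's rules are community-managed and broader:

```python
import re

# Illustrative blacklist; the project lets the community manage the real one.
BLACKLIST = {"violence", "malware"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # crude email (PII) detector

def check_content(text: str) -> list[str]:
    """Return violation labels for the text; an empty list means it passes."""
    violations = []
    lowered = text.lower()
    for word in sorted(BLACKLIST):
        if word in lowered:
            violations.append(f"blacklist:{word}")
    if EMAIL_RE.search(text):
        violations.append("pii:email")
    return violations
```

Logging every returned label, rather than silently dropping content, is what gives the system its audit trail.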
# Copy environment template
cp env.example .env
# Edit with your API keys
DEEPSEEK_API_KEY=sk-REPLACE_ME
OPENAI_API_KEY=sk-REPLACE_ME
ANTHROPIC_API_KEY=sk-REPLACE_ME
# Learning Configuration
MAX_DAILY_PROPOSALS=50
AUTO_APPROVAL_THRESHOLD=0.8
LEARNING_SESSION_HOUR=9
# Notification Configuration
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=your_email@gmail.com
SMTP_PASSWORD=REPLACE_ME_WITH_YOUR_APP_PASSWORD
TELEGRAM_BOT_TOKEN=REPLACE_ME_WITH_YOUR_BOT_TOKEN
TELEGRAM_CHAT_ID=your_chat_id
# Notification Settings
NOTIFY_LEARNING=true
NOTIFY_ERRORS=true

- `GET /api/learning/sessions` - Get learning sessions
- `POST /api/learning/sessions/run` - Trigger learning session
- `GET /api/learning/evolution/stage` - Get current AI stage
- `GET /api/learning/stats` - Get learning statistics
- `GET /api/learning/proposals` - Get learning proposals
- `POST /api/learning/proposals/{id}/approve` - Approve proposal
- `POST /api/learning/proposals/{id}/reject` - Reject proposal
- `GET /api/learning/rss/pipeline-stats` - Get RSS pipeline stats
- `POST /api/learning/rss/fetch-content` - Fetch content manually
- `GET /api/learning/ethics/stats` - Get ethical filter statistics
- `POST /api/learning/ethics/check-content` - Test content for ethical compliance
- `GET /api/learning/ethics/violations` - Get ethical violation history
- `POST /api/learning/ethics/clear-violations` - Clear violation log
- `POST /api/learning/ethics/add-blacklist-keyword` - Add keyword to blacklist
- `GET /api/learning/ethics/blacklist-keywords` - Get current blacklist
- `GET /api/learning/knowledge/stats` - Get knowledge consolidation stats
- `POST /api/learning/knowledge/consolidate` - Trigger knowledge consolidation
- `GET /api/learning/memory/stats` - Get advanced memory management stats
- `POST /api/learning/memory/optimize` - Optimize memory system
- `GET /api/learning/analytics/historical` - Get historical learning data
- `GET /api/learning/analytics/comparison` - Compare learning periods
- `GET /api/learning/analytics/trends` - Get learning trends analysis
We welcome contributions! See CONTRIBUTING.md for details.
- UI/UX Improvements: Dashboard enhancements, mobile responsiveness
- Learning Sources: Add new RSS feeds and API integrations
- Ethical Filtering: Improve safety algorithms and rules
- Documentation: API docs, tutorials, guides
- Testing: Unit tests, integration tests, performance tests
Whether you support or oppose it, StillMe forces you to pay attention:
- If you support: You're helping build the future of transparent AI
- If you oppose: You need to monitor what's happening
- Either way: You can't deny the impact of a 100% open project
- Supporting? - Contribute code, ideas, resources
- Criticizing? - Point out flaws, suggest improvements
- Skeptical? - Check the code, test the system, find vulnerabilities
- Interested? - Follow, share, discuss
"In a world of AI 'black boxes', our transparency is our strongest weapon."
This project is maintained by passion and community contributions. If you believe in the mission:
- Donate: [Donation link will be announced] - help maintain servers and development
- Contribute: Code, docs, testing, translations - every contribution matters
- Spread The Word: Share with developer and researcher communities
- Provide Feedback: Criticize, suggest, propose - help us improve
StillMe isn't just a product - it's a STATEMENT:
- AI must be transparent - no exceptions
- Ethics must be foundational - not an add-on feature
- Community must control - not corporations
- Future must be discussed - not imposed
Join us. Watch us. Critique us. But you can't ignore us.
Because in the darkness of AI "black boxes", our transparency is the light.
MIT License - see LICENSE for details.
StillMe is built with love and dedication to create a truly transparent, ethical AI system. Special thanks to:
- OpenAI for GPT models and API access
- DeepSeek for advanced AI capabilities
- Anthropic for Claude integration
- The Open Source Community for inspiration and support
StillMe - Self-Evolving AI System with Complete Ethical Transparency
"The future belongs to AI systems that can learn, adapt, and evolve. StillMe is that future, today."