🧠 StillMe - Self-Evolving AI System

StillMe Logo

A revolutionary AI system that learns and evolves from the internet daily, becoming smarter with each interaction.

Python FastAPI Streamlit Ethical AI License

🌟 What is StillMe?

StillMe is a Self-Evolving AI System that continuously learns from the internet, adapts to new information, and evolves through different developmental stages - just like a growing organism. Unlike traditional AI systems that remain static, StillMe gets smarter every day.

🎯 Core Concept

  • 🧬 Evolutionary Learning: AI progresses through stages (Infant → Child → Adolescent → Adult)
  • 📚 Multi-Source Learning: Integration of RSS feeds and public APIs
  • 🌐 Real-time Data: Live data from multiple trusted sources with transparency
  • 🛡️ Ethical Filtering: Comprehensive ethical content filtering with complete transparency
  • 📊 Transparent Dashboard: Complete visibility into all learning sources and data
  • 💬 Interactive Chat: Communicate with your evolving AI assistant

πŸ›‘οΈ Our Uncompromising Commitment

🌟 100% Transparency - Nothing to Hide

  • Every line of code is public - no "black box", no proprietary algorithms
  • Every API call is visible - see exactly what AI learns from and when
  • Every decision is transparent - from ethical filtering to quality assessment
  • Complete audit trail - full history of all learning decisions and violations

🎯 Ethical AI - Our Highest Priority

We believe that ethics isn't a feature - it's the foundation. StillMe is built with unwavering principles:

  • Safety First: Harmful content filtered at the source
  • Cultural Fairness: Respects global diversity and perspectives
  • Full Accountability: Every mistake is public and corrected
  • Community Control: You decide what's acceptable, not corporations

"We challenge the AI community to choose: Support transparency and ethics, or remain silent and admit they don't care."

🔒 Privacy & Data Protection

  • No personal data collection - learns only from public sources
  • Self-hosted codebase - you maintain complete control over your data
  • Delete anytime - your data, your rules, your control

πŸ›‘οΈ Ethical AI Transparency

StillMe features the world's first completely transparent ethical filtering system:

  • Complete Visibility: All ethical violations are logged and visible
  • Open Source: Filtering rules and algorithms are publicly available
  • Community Driven: Blacklist and rules can be managed by the community
  • Audit Trail: Full history of all ethical decisions and violations
  • Configurable: Ethics level can be adjusted based on community needs

This transparency ensures StillMe learns responsibly while maintaining community trust.

🚀 Quick Start

# Clone repository
git clone https://github.com/anhmtk/stillme_ai_ipc.git
cd stillme_ai_ipc

# Install dependencies
pip install -r requirements.txt

# Start backend
python start_backend.py

# Start frontend (new terminal)
python start_frontend.py

📊 Dashboard Features

  • Evolution Panel: Real-time AI stage and progress tracking
  • Ethical Filter: Complete transparency into ethical decisions
  • Learning Analytics: Historical progress with flexible timeline analysis
  • Community Controls: Manage ethical rules and blacklist
  • Raw Data Access: View actual API responses for verification

🧬 AI Evolution Stages

StillMe progresses through distinct developmental stages:

🍼 Infant Stage (0-100 learning sessions)

  • Basic pattern recognition
  • Simple content categorization
  • High safety focus
  • Manual approval required

👶 Child Stage (100-500 sessions)

  • Improved content understanding
  • Basic reasoning capabilities
  • Selective auto-approval
  • Enhanced safety protocols

🧑 Adolescent Stage (500-1000 sessions)

  • Advanced reasoning
  • Context awareness
  • Smart auto-approval
  • Balanced learning approach

🧠 Adult Stage (1000+ sessions)

  • Sophisticated understanding
  • Complex reasoning
  • Autonomous learning
  • Expert-level knowledge
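
For intuition, here is a minimal sketch of how a stage could be derived from the session counts above (illustrative only; the actual engine may also weigh quality scores and approval history):

# Illustrative stage selection based on completed learning sessions;
# thresholds mirror the stages described above.
def evolution_stage(sessions: int) -> str:
    if sessions < 100:
        return "Infant"
    if sessions < 500:
        return "Child"
    if sessions < 1000:
        return "Adolescent"
    return "Adult"

print(evolution_stage(42))   # Infant
print(evolution_stage(750))  # Adolescent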

🚀 The Vision: Fully Autonomous AI Evolution

🧠 Self-Evolution Goal

StillMe aims to become a fully autonomous learning AI:

  • Self-Assessment: Knows what it knows and what it doesn't
  • Proactive Learning: Actively seeks new knowledge sources
  • Self-Optimization: Adjusts learning process based on effectiveness
  • Autonomous Review: Gradually reduces human dependency as trust builds

🔬 Future Evolution Pathways

We open these questions to the community:

  • AI Self-Coding? - Should StillMe learn to debug and improve itself?
  • Red Team vs Blue Team? - AI attacking and defending itself for enhanced security?
  • Multi-Agent Collaboration? - Multiple StillMe instances collaborating on complex problems?
  • Cross-Domain Learning? - Expanding from AI to medicine, science, and other fields?

"This isn't our roadmap - it's a community discussion. What direction do you want AI's future to take?"

🔧 Architecture

Backend (FastAPI)

  • Learning Engine: Core evolutionary learning system
  • RSS Pipeline: Multi-source content fetching
  • Ethical Filter: Comprehensive safety system
  • Memory Management: Advanced knowledge storage
  • API Integration: Public APIs for diverse content
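
As a rough illustration of the backend's shape (not the repository's actual code), a FastAPI route for one of the endpoints listed further below might look like this:

# Illustrative FastAPI route; in StillMe the learning engine computes this value.
from fastapi import FastAPI

app = FastAPI()

@app.get("/api/learning/evolution/stage")
def get_stage():
    return {"stage": "child", "sessions_completed": 230}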

Frontend (Streamlit)

  • Dashboard: Real-time monitoring and control
  • Evolution Panel: AI stage visualization
  • Ethical Controls: Community management tools
  • Analytics: Historical learning data
  • Chat Interface: Interactive AI communication
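
A toy Streamlit panel in the spirit of the Evolution Panel (values are hard-coded here; the real dashboard reads them from the backend API):

# Minimal Streamlit panel with placeholder values.
import streamlit as st

st.title("StillMe Dashboard")
st.metric("Current stage", "Adolescent")
st.progress(0.62)
st.caption("Progress toward Adult stage")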

Database (SQLite)

  • Learning Sessions: Track AI evolution progress
  • Content Proposals: Store learning opportunities
  • Memory Items: Advanced knowledge storage
  • Ethical Violations: Complete audit trail
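
The table and column names below are illustrative assumptions, not the repository's actual schema, but they show how SQLite can back the stores listed above:

# Illustrative SQLite schema for the four stores (names are assumptions).
import sqlite3

conn = sqlite3.connect("stillme.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS learning_sessions  (id INTEGER PRIMARY KEY, started_at TEXT, items_learned INTEGER);
CREATE TABLE IF NOT EXISTS content_proposals  (id INTEGER PRIMARY KEY, source TEXT, title TEXT, status TEXT DEFAULT 'pending');
CREATE TABLE IF NOT EXISTS memory_items       (id INTEGER PRIMARY KEY, content TEXT, created_at TEXT);
CREATE TABLE IF NOT EXISTS ethical_violations (id INTEGER PRIMARY KEY, reason TEXT, source TEXT, detected_at TEXT);
""")
conn.commit()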

📚 Learning Sources

StillMe learns from diverse, trusted sources:

RSS Feeds

  • Hacker News, Reddit, GitHub
  • TechCrunch, ArXiv, Stack Overflow
  • Medium, Academic sources
  • News outlets, Subreddits

Public APIs

  • NewsAPI, GNews
  • Weather, Finance data
  • Translation services
  • Image understanding APIs
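
A minimal sketch of pulling items from one RSS source (feedparser and the feed URL are example choices, not necessarily what the pipeline uses):

# Fetch a handful of entries from an example Hacker News RSS feed.
import feedparser

feed = feedparser.parse("https://hnrss.org/frontpage")
for entry in feed.entries[:5]:
    print(entry.title, entry.link)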

πŸ›‘οΈ Ethical Safety Filter

StillMe features a comprehensive ethical content filtering system that ensures responsible AI learning:

Core Principles

  • Beneficence: Content must benefit learning and users
  • Non-Maleficence: Blocks harmful, toxic, or dangerous content
  • Autonomy: Protects privacy and personal information
  • Justice: Prevents biased or discriminatory content
  • Transparency: Complete visibility into all filtering decisions
  • Accountability: Full audit trail of ethical violations

Filtering Capabilities

  • Input Filtering: Blocks harmful content at the source (RSS/API)
  • Content Analysis: Detects toxicity, bias, and sensitive topics
  • PII Protection: Automatically identifies and blocks personal information
  • Source Validation: Flags unreliable or suspicious sources
  • Real-time Monitoring: Continuous ethical compliance checking

Transparency Features

  • Violation Logging: Complete history of all ethical violations
  • Dashboard Integration: Real-time ethical metrics and statistics
  • Community Management: Blacklist keywords and rules can be managed
  • Audit Trail: Full transparency into all ethical decisions
  • API Access: Programmatic access to ethical statistics and controls
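
To make the idea concrete, here is a toy version of the input-filtering step: check incoming text against a community-managed keyword blacklist and log any violation. The real filter also covers toxicity, bias, and PII detection.

# Toy blacklist check with violation logging (illustrative, not the actual filter).
from datetime import datetime, timezone

BLACKLIST = {"malware tutorial", "credit card dump"}  # example keywords
violations = []

def check_content(text: str, source: str) -> bool:
    lowered = text.lower()
    hits = [kw for kw in BLACKLIST if kw in lowered]
    if hits:
        violations.append({
            "source": source,
            "keywords": hits,
            "time": datetime.now(timezone.utc).isoformat(),
        })
        return False  # blocked
    return True       # allowed

print(check_content("How to build a weather dashboard", "rss:technews"))  # True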

🔧 Configuration

Environment Setup

# Copy environment template
cp env.example .env

# Edit with your API keys
DEEPSEEK_API_KEY=sk-REPLACE_ME
OPENAI_API_KEY=sk-REPLACE_ME
ANTHROPIC_API_KEY=sk-REPLACE_ME

# Learning Configuration
MAX_DAILY_PROPOSALS=50
AUTO_APPROVAL_THRESHOLD=0.8
LEARNING_SESSION_HOUR=9

# Notification Configuration
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=your_email@gmail.com
SMTP_PASSWORD=REPLACE_ME_WITH_YOUR_APP_PASSWORD
TELEGRAM_BOT_TOKEN=REPLACE_ME_WITH_YOUR_BOT_TOKEN
TELEGRAM_CHAT_ID=your_chat_id

# Notification Settings
NOTIFY_LEARNING=true
NOTIFY_ERRORS=true
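
At runtime these settings can be read like this (assuming python-dotenv; the project may load them differently):

# Load .env and read a few of the settings shown above.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory
max_daily_proposals = int(os.getenv("MAX_DAILY_PROPOSALS", "50"))
auto_approval_threshold = float(os.getenv("AUTO_APPROVAL_THRESHOLD", "0.8"))
deepseek_key = os.getenv("DEEPSEEK_API_KEY")  # None if not configured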

📊 API Endpoints

Core Learning APIs

  • GET /api/learning/sessions - Get learning sessions
  • POST /api/learning/sessions/run - Trigger learning session
  • GET /api/learning/evolution/stage - Get current AI stage
  • GET /api/learning/stats - Get learning statistics
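
Example of calling these endpoints with requests (the base URL and port are assumptions; adjust to wherever the backend is running):

# Query the current stage and stats, then trigger a session manually.
import requests

BASE = "http://localhost:8000"
stage = requests.get(f"{BASE}/api/learning/evolution/stage", timeout=10).json()
stats = requests.get(f"{BASE}/api/learning/stats", timeout=10).json()
print(stage, stats)
requests.post(f"{BASE}/api/learning/sessions/run", timeout=60)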

Content Management APIs

  • GET /api/learning/proposals - Get learning proposals
  • POST /api/learning/proposals/{id}/approve - Approve proposal
  • POST /api/learning/proposals/{id}/reject - Reject proposal
  • GET /api/learning/rss/pipeline-stats - Get RSS pipeline stats
  • POST /api/learning/rss/fetch-content - Fetch content manually

Ethical Safety APIs

  • GET /api/learning/ethics/stats - Get ethical filter statistics
  • POST /api/learning/ethics/check-content - Test content for ethical compliance
  • GET /api/learning/ethics/violations - Get ethical violation history
  • POST /api/learning/ethics/clear-violations - Clear violation log
  • POST /api/learning/ethics/add-blacklist-keyword - Add keyword to blacklist
  • GET /api/learning/ethics/blacklist-keywords - Get current blacklist

Advanced Features APIs

  • GET /api/learning/knowledge/stats - Get knowledge consolidation stats
  • POST /api/learning/knowledge/consolidate - Trigger knowledge consolidation
  • GET /api/learning/memory/stats - Get advanced memory management stats
  • POST /api/learning/memory/optimize - Optimize memory system

Analytics APIs

  • GET /api/learning/analytics/historical - Get historical learning data
  • GET /api/learning/analytics/comparison - Compare learning periods
  • GET /api/learning/analytics/trends - Get learning trends analysis

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for details.

Areas for Contribution

  • UI/UX Improvements: Dashboard enhancements, mobile responsiveness
  • Learning Sources: Add new RSS feeds and API integrations
  • Ethical Filtering: Improve safety algorithms and rules
  • Documentation: API docs, tutorials, guides
  • Testing: Unit tests, integration tests, performance tests

🤝 Join The Movement - Not Just A Project

🌍 Why You Can't Ignore StillMe

Whether you support or oppose it, StillMe forces you to pay attention:

  • If you support: You're helping build the future of transparent AI
  • If you oppose: You need to monitor what's happening
  • Either way: You can't deny the impact of a 100% open project

💬 We Welcome All Voices

  • Supporting? - Contribute code, ideas, resources
  • Criticizing? - Point out flaws, suggest improvements
  • Skeptical? - Check the code, test the system, find vulnerabilities
  • Interested? - Follow, share, discuss

"In a world of AI 'black boxes', our transparency is our strongest weapon."

💖 Support The Mission

This project is sustained by passion and community contributions. If you believe in the mission:

  • 📧 Donate: [Donation link will be announced] - help maintain servers and development
  • 🔧 Contribute: Code, docs, testing, translations - every contribution matters
  • 📢 Spread The Word: Share with developer and researcher communities
  • 🎯 Provide Feedback: Criticize, suggest, propose - help us improve

🎯 The Bottom Line

StillMe isn't just a product - it's a STATEMENT:

  1. AI must be transparent - no exceptions
  2. Ethics must be foundational - not an add-on feature
  3. Community must control - not corporations
  4. Future must be discussed - not imposed

Join us. Watch us. Critique us. But you can't ignore us.

Because in the darkness of AI "black boxes", our transparency is the light.

📄 License

MIT License - see LICENSE for details.

πŸ™ Acknowledgments

StillMe is built with love and dedication to create a truly transparent, ethical AI system. Special thanks to:

  • OpenAI for GPT models and API access
  • DeepSeek for advanced AI capabilities
  • Anthropic for Claude integration
  • The Open Source Community for inspiration and support

StillMe - Self-Evolving AI System with Complete Ethical Transparency 🤖✨

"The future belongs to AI systems that can learn, adapt, and evolve. StillMe is that future, today."
