29 Commits
v1.0L ... linux

Author SHA1 Message Date
e6e2f5f9cd Init Astro + Tailwind + Shadcn-ui 2025-07-12 00:25:57 -03:00
fa6007c1f3 Revise README to reflect YouTube Video Classifier features and setup instructions 2025-07-12 00:25:57 -03:00
89314f9c74 Enhance logging and configuration for YouTube Video Classifier
- Added fallback model configuration in config.ini
- Updated requirements.txt to include rich for enhanced logging
- Refactored script.py to implement rich logging for better visibility
- Modified functions to include channel link extraction and updated CSV saving logic
- Improved error handling and user feedback throughout the script
2025-07-12 00:25:56 -03:00
8c4177dca0 Add language detection and detailed sub-tags generation for YouTube videos 2025-07-12 00:25:56 -03:00
7cf2b903a8 Update requirements and refactor script for video classification automation
- Updated pynput version in requirements.txt
- Refactored script.py to enhance video classification functionality using Ollama
- Added methods for video information extraction, classification, and CSV handling
- Improved browser initialization and error handling
2025-07-12 00:25:53 -03:00
bb86ef17f3 Remove unused image files from the project 2025-07-12 00:25:48 -03:00
5e10b5a8b6 Remove Dockerfile, docker-compose.yml, and entrypoint script for YouTube Video Classifier 2025-07-12 00:25:47 -03:00
cfc2301cb2 Update Ollama host in configuration file to use localhost 2025-07-12 00:25:47 -03:00
da8aee58f4 Add .gitignore to exclude virtual environment directory 2025-07-12 00:25:47 -03:00
f69a8e78f9 Remove devcontainer usage 2025-07-12 00:25:40 -03:00
a770350f48 Add configuration file for YouTube Video Classifier 2025-07-12 00:25:40 -03:00
15e9a5dec8 Add setup script for Qwen2.5VL model in Ollama container 2025-07-12 00:25:40 -03:00
fd4645b710 Add test script for verifying Ollama connection and Qwen2.5-VL model 2025-07-12 00:25:40 -03:00
ae4ec9b96e Docker and scripts 2025-07-12 00:25:40 -03:00
0b98aa9799 Add demo script for YouTube video classification and include sample thumbnail 2025-07-12 00:25:40 -03:00
19c25908da Devcontainer setup 2025-07-12 00:25:40 -03:00
Francisco Pessano
f94ae9c31a Added .gitignore 2025-07-12 00:23:34 -03:00
Emi
78f971ec59 new comment 2025-07-07 19:27:53 -03:00
Emi
762540437f fixing one reference error 2025-07-07 17:43:12 -03:00
Emi
62d39224ca update readme 2025-07-07 17:42:41 -03:00
Emi
8972dc6aa0 relevant changes to fix errors 2025-07-07 15:38:39 -03:00
Emi
d2570ac709 new images 2025-07-07 15:37:34 -03:00
Emi
cbe20e3800 changes on readme 2025-07-06 20:39:01 -03:00
Emi
82e147ece5 changes on readme 2025-07-05 23:37:56 -03:00
Emi
cccd556675 refactor on readme 2025-07-05 23:34:45 -03:00
Emi
72b8b39076 fix error on browser location error 2025-07-05 23:21:38 -03:00
Emi
08dc823afb readme to linux branch 2025-07-05 23:18:00 -03:00
Emi
a1b11a2265 little changes on script 2025-07-05 23:16:40 -03:00
Emi
fa5df4c16c first commit to linux 2025-07-05 22:47:37 -03:00
28 changed files with 6881 additions and 36 deletions

28
.gitignore vendored Normal file

@@ -0,0 +1,28 @@
venv
temp_thumbnail.png
video_classifications.csv
# build output
dist/
# generated types
.astro/
# dependencies
node_modules/
# logs
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
# environment variables
.env
.env.production
# macOS-specific files
.DS_Store
# jetbrains setting folder
.idea/

5
.vscode/extensions.json vendored Normal file

@@ -0,0 +1,5 @@
{
  "recommendations": [
    "astro-build.astro-vscode"
  ]
}

214
README.md Normal file

@@ -0,0 +1,214 @@
# YouTube Video Classifier
An AI-powered tool that automatically classifies YouTube videos in your "Watch Later" playlist based on their titles and thumbnails using vision-language models through Ollama.
## Features ✨
- 🤖 **AI-Powered Classification**: Uses Ollama with Qwen2.5-VL and fallback models to analyze video titles and thumbnails
- 🔄 **Robust LLM Integration**: Automatic fallback between models with increasing timeouts for reliability
- 📊 **Comprehensive CSV Storage**: Saves detailed video information including classifications, metadata, and thumbnails
- 🌐 **Multi-language Detection**: Automatically detects video language using AI
- 🏷️ **Smart Tagging**: Generates detailed sub-tags for better content organization
- 🎯 **Smart Categories**: Uses existing classifications or creates new ones automatically
- 🖥️ **Browser Automation**: Selenium-based interaction with YouTube for reliable data extraction
- 🎨 **Beautiful Logging**: Rich console output with colors and emojis for better UX
- ⌨️ **Easy Control**: Press 'q' at any time to safely quit the process
## Quick Start
### Prerequisites
- Python 3.11.10+
- Ollama installed locally
- Chrome or Chromium browser
### Setup
1. **Install Ollama**: Download from [https://ollama.ai](https://ollama.ai)

2. **Pull Required Models**:

   ```bash
   ollama pull qwen2.5vl:7b
   ollama pull gemma2:2b
   ```

3. **Start Ollama Service**:

   ```bash
   ollama serve
   ```

4. **Clone and Setup Project**:

   ```bash
   git clone <repository-url>
   cd youtube-video-classifier

   # Create virtual environment
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate

   # Install dependencies
   pip install -r requirements.txt
   ```

5. **Configure Settings** (optional):
   Edit `config.ini` to customize your setup

6. **Run the Classifier**:

   ```bash
   python script.py
   ```
## How It Works 🔄
1. **Browser Initialization**: Opens Chrome/Chromium and navigates to your YouTube "Watch Later" playlist
2. **Video Detection**: Finds and extracts information from playlist videos using Selenium
3. **Data Extraction**: Captures video title, thumbnail, channel info, duration, and upload date
4. **AI Analysis**: Uses Ollama models to:
- Classify the video into categories
- Detect the primary language
- Generate detailed sub-tags
5. **Smart Fallback**: If primary model fails/times out, automatically switches to fallback model
6. **Data Storage**: Saves all information to CSV with base64-encoded thumbnails
7. **Playlist Management**: Removes processed videos from "Watch Later" playlist
8. **Continuous Processing**: Continues until all videos are processed or user quits
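Step 4 above boils down to a single POST against Ollama's `/api/generate` endpoint, much like `demo_classification.py` does. A minimal sketch of how such a payload is assembled (the helper name is illustrative; field names follow the Ollama HTTP API):

```python
import base64

def build_classify_payload(title, thumbnail_path=None, model="qwen2.5vl:7b"):
    """Assemble the JSON body for one Ollama /api/generate call."""
    payload = {
        "model": model,
        "prompt": f'Classify this YouTube video title: "{title}". '
                  "Respond with ONLY the classification name.",
        "stream": False,  # return one JSON response, not a token stream
    }
    if thumbnail_path:
        # Thumbnails travel as base64-encoded bytes in the "images" field
        with open(thumbnail_path, "rb") as f:
            payload["images"] = [base64.b64encode(f.read()).decode("utf-8")]
    return payload

# To send it (requires a running local Ollama):
# requests.post("http://localhost:11434/api/generate",
#               json=build_classify_payload("Easy Pasta Recipe"), timeout=60)
```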
## Configuration
The `config.ini` file allows you to customize various settings:
```ini
[DEFAULT]
# Ollama settings
ollama_host = http://localhost:11434
ollama_model = qwen2.5vl:7b
ollama_fallback_model = gemma2:2b
# File paths
classifications_csv = video_classifications.csv
playlist_url = https://www.youtube.com/playlist?list=WL
# LLM timeout settings (in seconds)
llm_primary_timeout = 60
llm_fallback_timeout = 60
# Processing settings
enable_delete = false
enable_playlist_creation = false
```
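These settings are read with Python's stdlib `configparser`, as the repo's helper scripts do. A minimal sketch (the fallback values shown are the defaults from this file):

```python
import configparser

# Read config.ini; missing keys fall back to the defaults shown
config = configparser.ConfigParser()
config.read('config.ini')

host = config.get('DEFAULT', 'ollama_host', fallback='http://localhost:11434')
primary_timeout = config.getint('DEFAULT', 'llm_primary_timeout', fallback=60)
enable_delete = config.getboolean('DEFAULT', 'enable_delete', fallback=False)
```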
## CSV Output Format 📋
The script creates a comprehensive CSV file with the following columns:
- `video_title`: Title of the video
- `video_url`: YouTube URL of the video
- `thumbnail_url`: Path to the saved thumbnail
- `classification`: AI-generated category
- `language`: Detected language of the video
- `channel_name`: Name of the YouTube channel
- `channel_link`: URL to the channel
- `video_length_seconds`: Duration in seconds
- `video_date`: Upload date
- `detailed_subtags`: AI-generated specific tags
- `playlist_name`: Source playlist name
- `playlist_link`: Source playlist URL
- `image_data`: Base64-encoded thumbnail data
- `timestamp`: When the classification was made
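Because `image_data` stores the thumbnail as base64 text, images can be recovered from the CSV with the standard library alone. A sketch (the helper name is made up; the column names are those listed above):

```python
import base64
import csv

def load_thumbnails(csv_path):
    """Yield (video_title, raw image bytes) pairs from the results CSV."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("image_data"):
                yield row["video_title"], base64.b64decode(row["image_data"])

# e.g. dump every thumbnail back to disk:
# for title, data in load_thumbnails("video_classifications.csv"):
#     with open(f"{title[:40]}.png", "wb") as out:
#         out.write(data)
```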
## File Structure 📁
```
├── script.py # Main classification script
├── config.ini # Configuration settings
├── requirements.txt # Python dependencies
├── video_classifications.csv # Generated results (created when first run)
└── README.md # This file
```
## Features in Detail
### AI Classification System
- **Primary Model**: Qwen2.5-VL 7B for high-quality vision-language analysis
- **Fallback Model**: Gemma2 2B for faster processing when primary model is slow
- **Timeout Management**: Automatically increases timeout periods if models are struggling
- **Continuous Retry**: Keeps trying until successful or user cancels
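The fallback-and-retry behaviour described above can be sketched as a small wrapper; the function and parameter names here are illustrative, not the script's actual API:

```python
def classify_with_fallback(call_model,
                           models=("qwen2.5vl:7b", "gemma2:2b"),
                           base_timeout=60, max_rounds=3):
    """Try each model in order; if all time out, double the timeout and retry."""
    timeout = base_timeout
    for _ in range(max_rounds):
        for model in models:
            try:
                return call_model(model, timeout)  # e.g. a POST to /api/generate
            except TimeoutError:
                continue  # fall through to the next (smaller, faster) model
        timeout *= 2  # every model timed out this round: escalate and retry
    raise RuntimeError("all models failed after maximum retries")
```

Here `call_model` would wrap the actual `requests.post` to `/api/generate` and raise `TimeoutError` on `requests.exceptions.Timeout`.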
### Data Extraction
- **Video Metadata**: Title, URL, duration, upload date
- **Channel Information**: Name and link to channel
- **Thumbnail Capture**: Screenshots saved as base64 in CSV
- **Playlist Context**: Source playlist name and URL
### Browser Automation
- **Multiple Chrome Paths**: Automatically finds Chrome/Chromium installation
- **WebDriver Management**: Handles chromedriver setup and fallbacks
- **Robust Selectors**: Multiple CSS selectors for reliable element finding
- **Error Recovery**: Graceful handling of UI changes and loading delays
### User Experience
- **Rich Console Output**: Colored logging with emojis and status indicators
- **Progress Tracking**: Clear indication of current processing status
- **Safe Exit**: Press 'q' at any time to cleanly stop processing
- **Error Reporting**: Detailed error messages for troubleshooting
## Testing Your Setup
Before running the main script, you can test individual components:
1. **Test Ollama Connection**:

   ```python
   import requests
   response = requests.get('http://localhost:11434/api/tags')
   print(response.json())
   ```

2. **Test Browser Automation**:
   Run the script and check if Chrome opens correctly

3. **Test Model Response**:
   The script will verify model availability on startup
## Troubleshooting 🔧
### Common Issues
**Ollama Connection Error**:
- Ensure Ollama is running: `ollama serve`
- Check the host URL in config.ini
- Verify models are installed: `ollama list`
**Browser Issues**:
- Install Chrome or Chromium
- Update chromedriver if needed
- Check if browser is in PATH
**Model Timeout**:
- The script automatically handles timeouts with fallback
- Consider increasing timeout values in config.ini
- Ensure sufficient system resources
**Selenium Errors**:
- YouTube may have changed their HTML structure
- Check for browser updates
- Verify you're logged into YouTube
### Performance Tips
- **For faster processing**: Use smaller models like `gemma2:2b` as primary
- **For better accuracy**: Use larger models like `qwen2.5vl:7b` as primary
- **For stability**: Keep both models installed for automatic fallback
- **For large playlists**: Consider running in smaller batches
## Contributing
1. Fork the repository
2. Create a feature branch
3. Test your changes thoroughly
4. Submit a pull request
## License
MIT License - see LICENSE file for details
---
**Note**: This tool is for personal use and educational purposes. Please respect YouTube's Terms of Service and rate limits.

30
config.ini Normal file

@@ -0,0 +1,30 @@
# Configuration file for YouTube Video Classifier
[DEFAULT]
# Ollama settings
ollama_host = http://localhost:11434
ollama_model = qwen2.5vl:7b
ollama_fallback_model = gemma2:2b
# File paths
classifications_csv = video_classifications.csv
browser_image = brave.png
# YouTube settings
playlist_url = https://www.youtube.com/playlist?list=WL
# Image recognition settings
confidence_threshold = 0.8
search_timeout = 0.5
sleep_duration = 0.2
# Processing settings
restart_tab_frequency = 90
enable_delete = false
enable_playlist_creation = false
# LLM timeout settings (in seconds)
llm_primary_timeout = 60
llm_fallback_timeout = 60

140
demo_classification.py Normal file

@@ -0,0 +1,140 @@
#!/usr/bin/env python3
"""
Demo script showing how the video classification works
"""
import requests
import base64
import time
import configparser

config = configparser.ConfigParser()
config.read('config.ini')
ollama_host = config.get('DEFAULT', 'ollama_host', fallback='http://ollama:11434')


def classify_demo_video(video_obj):
    """Demonstrate video classification."""
    try:
        # If there's a thumbnail, convert the image to base64
        if video_obj.get('thumbnail'):
            with open(video_obj['thumbnail'], "rb") as image_file:
                # Read the file exactly once; a second read() on the same
                # handle would return empty bytes
                image_data = base64.b64encode(image_file.read()).decode('utf-8')
        else:
            image_data = None

        existing_classifications = ["Tech Reviews", "Cooking", "Gaming", "Music"]
        prompt = f"""
Please classify this YouTube video based on its title and thumbnail.

Video Title: {video_obj['title']}
Existing Classifications: {", ".join(existing_classifications)}

Instructions:
1. If the video fits into one of the existing classifications, use that exact classification name.
2. If the video doesn't fit any existing classification, create a new appropriate classification name.
3. Classification names should be concise (1-3 words) and descriptive.
4. Examples of good classifications: "Tech Reviews", "Cooking", "Gaming", "Education", "Music", "Comedy", etc.
5. Respond with ONLY the classification name, nothing else.
"""
        print(f"Classifying: '{video_obj['title']}'")
        print(f"Using thumbnail: {video_obj.get('thumbnail', 'None')}")
        print("Sending request to Ollama...")

        # Prepare the request payload
        payload = {
            'model': 'qwen2.5vl:7b',
            'prompt': prompt,
            'stream': False
        }
        # Only include images if image_data is available
        if image_data:
            payload['images'] = [image_data]

        response = requests.post(
            f'{ollama_host}/api/generate',
            json=payload,
            timeout=60
        )
        if response.status_code == 200:
            result = response.json()
            classification = result['response'].strip().strip('"\'')
            print(f"✅ Classification: '{classification}'")
            return classification
        else:
            print(f"❌ Error: {response.status_code}")
            return "Uncategorized"
    except Exception as e:
        print(f"❌ Error: {e}")
        return "Uncategorized"


def run_demo():
    """Run classification demo with sample videos."""
    sample_videos = [
        {
            "title": "I can't believe this change!",
            "thumbnail": "img/iphone_thumbnail.png"
        },
        {"title": "iPhone 15 Pro Review - Best Camera Phone?"},
        {"title": "Easy Pasta Recipe for Beginners"},
        {"title": "Minecraft Survival Guide - Episode 1"},
        {"title": "Classical Piano Music for Studying"},
        {"title": "Machine Learning Explained Simply"},
    ]

    print("YouTube Video Classification Demo")
    print("=" * 40)

    results = []
    for i, video_obj in enumerate(sample_videos, 1):
        print(f"\n--- Demo {i}/{len(sample_videos)} ---")
        # Classify the video
        classification = classify_demo_video(video_obj)
        results.append((video_obj['title'], classification))
        time.sleep(1)  # Be nice to the API

    print("\n" + "=" * 40)
    print("DEMO RESULTS:")
    print("=" * 40)
    for title, classification in results:
        print(f"{classification:15} | {title}")

    print("\nDemo complete! The script can:")
    print("• Use existing categories when appropriate")
    print("• Create new categories for unique content")
    print("• Analyze both title and thumbnail information")


if __name__ == '__main__':
    # Check if Ollama is running
    try:
        response = requests.get(f'{ollama_host}/api/tags', timeout=5)
        if response.status_code != 200:
            print("❌ Ollama is not running. Please start it with: ollama serve")
            exit(1)
    except requests.exceptions.RequestException:
        print("❌ Cannot connect to Ollama. Please start it with: ollama serve")
        exit(1)

    run_demo()

BIN
img/iphone_thumbnail.png Normal file (binary, 456 KiB, not shown)

requirements.txt

@@ -2,4 +2,13 @@
opencv-python==4.11.0.86
pillow==11.3.0
PyAutoGUI==0.9.54
keyboard==0.13.5
pynput==1.8.1
requests==2.31.0
pandas~=2.3.1
ollama==0.2.1
configparser==6.0.0
pyperclip==1.8.2
pytesseract==0.3.10
selenium==4.15.2
webdriver-manager==4.0.1
rich==13.8.0

1314
script.py

File diff suppressed because it is too large

63
setup.sh Executable file

@@ -0,0 +1,63 @@
#!/bin/bash
# YouTube Video Classifier Setup Script

echo "🎬 YouTube Video Classifier Setup"
echo "=================================="

# Check if Python 3 is available
if ! command -v python3 &> /dev/null; then
    echo "❌ Python 3 not found. Please install Python 3.11.10 or newer"
    exit 1
fi
echo "✅ Python 3 found"

# Create virtual environment
echo "📦 Creating virtual environment..."
python3 -m venv venv

# Activate virtual environment
echo "🔧 Activating virtual environment..."
source venv/bin/activate

# Install requirements
echo "📥 Installing Python dependencies..."
pip install -r requirements.txt

# Check if Ollama is installed
if ! command -v ollama &> /dev/null; then
    echo "❌ Ollama not found. Please install Ollama from https://ollama.ai"
    echo "   After installation, run:"
    echo "   1. ollama serve"
    echo "   2. ollama pull qwen2.5vl:7b"
    exit 1
fi
echo "✅ Ollama found"

# Check if Ollama is running
if ! curl -s http://localhost:11434/api/tags &> /dev/null; then
    echo "⚠️ Ollama is not running. Starting Ollama..."
    ollama serve &
    sleep 5
fi

# Pull Qwen2.5VL model
echo "🤖 Pulling Qwen2.5VL model..."
ollama pull qwen2.5vl:7b

# Test setup
echo "🧪 Testing setup..."
python test_ollama.py

echo "✅ Setup complete!"
echo ""
echo "Next steps:"
echo "1. Make sure your browser is pinned to the taskbar"
echo "2. Update the browser image in img/ folder if needed"
echo "3. Run: python script.py"
echo ""
echo "Optional:"
echo "- Run demo: python demo_classification.py"
echo "- Analyze results: python playlist_manager.py --analyze"

114
setup_model.py Normal file

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
Script to ensure the Qwen2.5VL model is available in the Ollama container
"""
import configparser
import time
import sys

import requests


def load_config():
    """Load configuration from config.ini"""
    config = configparser.ConfigParser()
    config.read('config.ini')
    ollama_host = config.get('DEFAULT', 'ollama_host', fallback='http://ollama:11434')
    ollama_model = config.get('DEFAULT', 'ollama_model', fallback='qwen2.5vl:7b')
    return ollama_host, ollama_model


def wait_for_ollama(host, max_attempts=30):
    """Wait for Ollama container to be ready"""
    print(f"⏳ Waiting for Ollama container at {host}...")
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.get(f"{host}/api/tags", timeout=5)
            if response.status_code == 200:
                print("✅ Ollama container is ready!")
                return True
        except requests.exceptions.RequestException:
            pass
        print(f"   Attempt {attempt}/{max_attempts} - waiting...")
        time.sleep(2)
    print("❌ Ollama container is not responding after maximum attempts")
    return False


def check_model_exists(host, model_name):
    """Check if the model is already available"""
    try:
        response = requests.get(f"{host}/api/tags", timeout=5)
        if response.status_code == 200:
            models = response.json()
            model_names = [model['name'] for model in models.get('models', [])]
            return any(model_name in name for name in model_names), model_names
        return False, []
    except requests.exceptions.RequestException as e:
        print(f"❌ Error checking models: {e}")
        return False, []


def pull_model(host, model_name):
    """Pull the model from Ollama"""
    print(f"📥 Pulling model '{model_name}' (this may take several minutes)...")
    try:
        response = requests.post(
            f"{host}/api/pull",
            json={"name": model_name},
            timeout=600  # 10 minutes timeout
        )
        if response.status_code == 200:
            print(f"✅ Successfully pulled model '{model_name}'")
            return True
        else:
            print(f"❌ Failed to pull model: HTTP {response.status_code}")
            print(f"Response: {response.text}")
            return False
    except requests.exceptions.RequestException as e:
        print(f"❌ Error pulling model: {e}")
        return False


def main():
    """Main function to set up the model"""
    print("🔧 Model Setup for YouTube Video Classifier")
    print("=" * 50)

    # Load configuration
    try:
        ollama_host, ollama_model = load_config()
        print("📋 Configuration:")
        print(f"   Host: {ollama_host}")
        print(f"   Model: {ollama_model}")
        print()
    except Exception as e:
        print(f"❌ Failed to load configuration: {e}")
        sys.exit(1)

    # Wait for Ollama to be ready
    if not wait_for_ollama(ollama_host):
        sys.exit(1)

    # Check if model exists
    model_exists, available_models = check_model_exists(ollama_host, ollama_model)
    if model_exists:
        print(f"✅ Model '{ollama_model}' is already available!")
    else:
        print(f"📋 Available models: {available_models}")
        print(f"❌ Model '{ollama_model}' not found")
        if pull_model(ollama_host, ollama_model):
            print(f"🎉 Model '{ollama_model}' is now ready for use!")
        else:
            print(f"❌ Failed to set up model '{ollama_model}'")
            sys.exit(1)

    print("\n🎬 YouTube Video Classifier is ready!")
    print("🧪 Run 'python test_ollama.py' to verify the setup")


if __name__ == "__main__":
    main()

99
test_ollama.py Normal file

@@ -0,0 +1,99 @@
#!/usr/bin/env python3
"""
Test script to verify Ollama connection and Qwen2.5-VL model
"""
import configparser

import requests


def load_config():
    """Load configuration from config.ini"""
    config = configparser.ConfigParser()
    config.read('config.ini')
    ollama_host = config.get('DEFAULT', 'ollama_host', fallback='http://ollama:11434')
    ollama_model = config.get('DEFAULT', 'ollama_model', fallback='qwen2.5vl:7b')
    return ollama_host, ollama_model


def test_ollama_connection(host, model_name):
    """Test if Ollama is running and accessible."""
    try:
        response = requests.get(f'{host}/api/tags', timeout=5)
        if response.status_code == 200:
            models = response.json()
            print("✅ Ollama is running!")
            model_names = [model['name'] for model in models.get('models', [])]
            print(f"Available models: {model_names}")

            # Check if the configured model is available
            model_available = any(model_name in name for name in model_names)
            if model_available:
                print(f"{model_name} model is available!")
            else:
                print(f"{model_name} model not found. Available models: {model_names}")
                print(f"Model may still be downloading. Check with: curl {host}/api/tags")
            return True
        else:
            print(f"❌ Ollama responded with status code: {response.status_code}")
            return False
    except requests.exceptions.ConnectionError:
        print("❌ Cannot connect to Ollama container. Is the ollama service running?")
        print("💡 Try: docker-compose up -d ollama")
        return False
    except Exception as e:
        print(f"❌ Error checking Ollama: {e}")
        return False


def test_classification(host, model_name):
    """Test a simple classification without image."""
    try:
        response = requests.post(
            f'{host}/api/generate',
            json={
                'model': model_name,
                'prompt': 'Classify this video title into a category: "How to Cook Pasta - Italian Recipe Tutorial". Respond with only the category name.',
                'stream': False
            },
            timeout=30
        )
        if response.status_code == 200:
            result = response.json()
            classification = result['response'].strip()
            print(f"✅ Test classification successful: '{classification}'")
            return True
        else:
            print(f"❌ Classification test failed: {response.status_code}")
            print(f"Response: {response.text}")
            return False
    except Exception as e:
        print(f"❌ Error testing classification: {e}")
        return False


if __name__ == '__main__':
    print("Testing Ollama setup for YouTube Video Classifier...")
    print("-" * 50)

    # Load configuration
    try:
        ollama_host, ollama_model = load_config()
        print("📋 Configuration:")
        print(f"   Host: {ollama_host}")
        print(f"   Model: {ollama_model}")
        print()
    except Exception as e:
        print(f"❌ Failed to load configuration: {e}")
        exit(1)

    if test_ollama_connection(ollama_host, ollama_model):
        print("\nTesting classification...")
        test_classification(ollama_host, ollama_model)

    print("\nSetup verification complete!")
    print("\nIf all tests passed, you can run the main script with: python script.py")
    print("If any tests failed, please:")
    print("1. Make sure the ollama container is running: docker-compose up -d ollama")
    print(f"2. Wait for the model to download: curl {ollama_host}/api/tags")
    print("3. Check container logs: docker-compose logs ollama")

13
web/README.md Normal file

@@ -0,0 +1,13 @@
# Astro with Tailwind
```sh
pnpm create astro@latest -- --template with-tailwindcss
```
[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/github/withastro/astro/tree/latest/examples/with-tailwindcss)
[![Open with CodeSandbox](https://assets.codesandbox.io/github/button-edit-lime.svg)](https://codesandbox.io/p/sandbox/github/withastro/astro/tree/latest/examples/with-tailwindcss)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/withastro/astro?devcontainer_path=.devcontainer/with-tailwindcss/devcontainer.json)
Astro comes with [Tailwind](https://tailwindcss.com) support out of the box. This example showcases how to style your Astro project with Tailwind.
For complete setup instructions, please see our [Tailwind Integration Guide](https://docs.astro.build/en/guides/integrations-guide/tailwind).

14
web/astro.config.mjs Normal file

@@ -0,0 +1,14 @@
// @ts-check
import { defineConfig } from 'astro/config';

import tailwindcss from '@tailwindcss/vite';
import react from '@astrojs/react';

// https://astro.build/config
export default defineConfig({
  vite: {
    plugins: [tailwindcss()]
  },
  integrations: [react()]
});

21
web/components.json Normal file

@@ -0,0 +1,21 @@
{
  "$schema": "https://ui.shadcn.com/schema.json",
  "style": "new-york",
  "rsc": false,
  "tsx": true,
  "tailwind": {
    "config": "",
    "css": "src/styles/global.css",
    "baseColor": "neutral",
    "cssVariables": true,
    "prefix": ""
  },
  "aliases": {
    "components": "@/components",
    "utils": "@/lib/utils",
    "ui": "@/components/ui",
    "lib": "@/lib",
    "hooks": "@/hooks"
  },
  "iconLibrary": "lucide"
}

32
web/package.json Normal file

@@ -0,0 +1,32 @@
{
  "name": "web",
  "type": "module",
  "version": "0.0.1",
  "scripts": {
    "dev": "astro dev",
    "build": "astro build",
    "preview": "astro preview",
    "astro": "astro"
  },
  "dependencies": {
    "@astrojs/mdx": "^4.3.0",
    "@astrojs/react": "^4.3.0",
    "@radix-ui/react-slot": "^1.2.3",
    "@tailwindcss/vite": "^4.1.3",
    "@types/canvas-confetti": "^1.9.0",
    "@types/react": "^19.1.8",
    "@types/react-dom": "^19.1.6",
    "astro": "^5.11.0",
    "canvas-confetti": "^1.9.3",
    "class-variance-authority": "^0.7.1",
    "clsx": "^2.1.1",
    "lucide-react": "^0.525.0",
    "react": "^19.1.0",
    "react-dom": "^19.1.0",
    "tailwind-merge": "^3.3.1",
    "tailwindcss": "^4.1.3"
  },
  "devDependencies": {
    "tw-animate-css": "^1.3.5"
  }
}

4508
web/pnpm-lock.yaml generated Normal file

File diff suppressed because it is too large

9
web/public/favicon.svg Normal file

@@ -0,0 +1,9 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 128 128">
<path d="M50.4 78.5a75.1 75.1 0 0 0-28.5 6.9l24.2-65.7c.7-2 1.9-3.2 3.4-3.2h29c1.5 0 2.7 1.2 3.4 3.2l24.2 65.7s-11.6-7-28.5-7L67 45.5c-.4-1.7-1.6-2.8-2.9-2.8-1.3 0-2.5 1.1-2.9 2.7L50.4 78.5Zm-1.1 28.2Zm-4.2-20.2c-2 6.6-.6 15.8 4.2 20.2a17.5 17.5 0 0 1 .2-.7 5.5 5.5 0 0 1 5.7-4.5c2.8.1 4.3 1.5 4.7 4.7.2 1.1.2 2.3.2 3.5v.4c0 2.7.7 5.2 2.2 7.4a13 13 0 0 0 5.7 4.9v-.3l-.2-.3c-1.8-5.6-.5-9.5 4.4-12.8l1.5-1a73 73 0 0 0 3.2-2.2 16 16 0 0 0 6.8-11.4c.3-2 .1-4-.6-6l-.8.6-1.6 1a37 37 0 0 1-22.4 2.7c-5-.7-9.7-2-13.2-6.2Z" />
<style>
path { fill: #000; }
@media (prefers-color-scheme: dark) {
path { fill: #FFF; }
}
</style>
</svg>


web/src/components/Button.astro Normal file

@@ -0,0 +1,19 @@
---
// Click button, get confetti!
// Styled by Tailwind :)
---
<button
  class="appearance-none py-2 px-4 bg-purple-500 text-white font-semibold rounded-lg shadow-md hover:bg-purple-700 focus:outline-none focus:ring-2 focus:ring-purple-400 focus:ring-opacity-75"
>
  <slot />
</button>

<script>
  import confetti from 'canvas-confetti';

  const button = document.body.querySelector('button');
  if (button) {
    button.addEventListener('click', () => confetti());
  }
</script>

web/src/components/ui/button.tsx Normal file

@@ -0,0 +1,59 @@
import * as React from "react"
import { Slot } from "@radix-ui/react-slot"
import { cva, type VariantProps } from "class-variance-authority"

import { cn } from "@/lib/utils"

const buttonVariants = cva(
  "inline-flex items-center justify-center gap-2 whitespace-nowrap rounded-md text-sm font-medium transition-all disabled:pointer-events-none disabled:opacity-50 [&_svg]:pointer-events-none [&_svg:not([class*='size-'])]:size-4 shrink-0 [&_svg]:shrink-0 outline-none focus-visible:border-ring focus-visible:ring-ring/50 focus-visible:ring-[3px] aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive",
  {
    variants: {
      variant: {
        default:
          "bg-primary text-primary-foreground shadow-xs hover:bg-primary/90",
        destructive:
          "bg-destructive text-white shadow-xs hover:bg-destructive/90 focus-visible:ring-destructive/20 dark:focus-visible:ring-destructive/40 dark:bg-destructive/60",
        outline:
          "border bg-background shadow-xs hover:bg-accent hover:text-accent-foreground dark:bg-input/30 dark:border-input dark:hover:bg-input/50",
        secondary:
          "bg-secondary text-secondary-foreground shadow-xs hover:bg-secondary/80",
        ghost:
          "hover:bg-accent hover:text-accent-foreground dark:hover:bg-accent/50",
        link: "text-primary underline-offset-4 hover:underline",
      },
      size: {
        default: "h-9 px-4 py-2 has-[>svg]:px-3",
        sm: "h-8 rounded-md gap-1.5 px-3 has-[>svg]:px-2.5",
        lg: "h-10 rounded-md px-6 has-[>svg]:px-4",
        icon: "size-9",
      },
    },
    defaultVariants: {
      variant: "default",
      size: "default",
    },
  }
)

function Button({
  className,
  variant,
  size,
  asChild = false,
  ...props
}: React.ComponentProps<"button"> &
  VariantProps<typeof buttonVariants> & {
    asChild?: boolean
  }) {
  const Comp = asChild ? Slot : "button"

  return (
    <Comp
      data-slot="button"
      className={cn(buttonVariants({ variant, size, className }))}
      {...props}
    />
  )
}

export { Button, buttonVariants }

web/src/layouts/main.astro Normal file

@@ -0,0 +1,17 @@
---
import { Button } from '@/components/ui/button';
import '../styles/global.css';

const { content } = Astro.props;
---

<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <link rel="icon" type="image/svg+xml" href="/favicon.svg" />
    <title>{content.title}</title>
  </head>
  <body>
    <slot />
  </body>
</html>

6
web/src/lib/utils.ts Normal file

@@ -0,0 +1,6 @@
import { clsx, type ClassValue } from "clsx"
import { twMerge } from "tailwind-merge"

export function cn(...inputs: ClassValue[]) {
  return twMerge(clsx(inputs))
}

27
web/src/pages/index.astro Normal file

@@ -0,0 +1,27 @@
---
import '../styles/global.css';

// Component Imports
import Button from '../components/Button.astro';
import { Button as ShadcnButton } from '../components/ui/button.tsx';

// Full Astro Component Syntax:
// https://docs.astro.build/basics/astro-components/
---

<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <link rel="icon" type="image/svg+xml" href="/favicon.svg" />
    <meta name="generator" content={Astro.generator} />
    <title>Astro + TailwindCSS</title>
  </head>
  <body>
    <div class="grid place-items-center h-screen content-center">
      <Button>Tailwind Button in Astro!</Button>
      <a href="/markdown-page" class="p-4 underline">Markdown is also supported...</a>
      <ShadcnButton>Shadcn Button</ShadcnButton>
    </div>
  </body>
</html>

web/src/pages/markdown-page.md Normal file

@@ -0,0 +1,16 @@
---
title: 'Markdown + Tailwind'
layout: ../layouts/main.astro
---

<div class="grid place-items-center h-screen content-center">
  <div class="py-2 px-4 bg-purple-500 text-white font-semibold rounded-lg shadow-md">
    Tailwind classes also work in Markdown!
  </div>
  <a
    href="/"
    class="p-4 underline hover:text-purple-500 transition-colors ease-in-out duration-200"
  >
    Go home
  </a>
</div>

124
web/src/styles/global.css Normal file

@@ -0,0 +1,124 @@
@import 'tailwindcss';
@import "tw-animate-css";

@custom-variant dark (&:is(.dark *));

@theme inline {
  --radius-sm: calc(var(--radius) - 4px);
  --radius-md: calc(var(--radius) - 2px);
  --radius-lg: var(--radius);
  --radius-xl: calc(var(--radius) + 4px);
  --color-background: var(--background);
  --color-foreground: var(--foreground);
  --color-card: var(--card);
  --color-card-foreground: var(--card-foreground);
  --color-popover: var(--popover);
  --color-popover-foreground: var(--popover-foreground);
  --color-primary: var(--primary);
  --color-primary-foreground: var(--primary-foreground);
  --color-secondary: var(--secondary);
  --color-secondary-foreground: var(--secondary-foreground);
  --color-muted: var(--muted);
  --color-muted-foreground: var(--muted-foreground);
  --color-accent: var(--accent);
  --color-accent-foreground: var(--accent-foreground);
  --color-destructive: var(--destructive);
  --color-border: var(--border);
  --color-input: var(--input);
  --color-ring: var(--ring);
  --color-chart-1: var(--chart-1);
  --color-chart-2: var(--chart-2);
  --color-chart-3: var(--chart-3);
  --color-chart-4: var(--chart-4);
  --color-chart-5: var(--chart-5);
  --color-sidebar: var(--sidebar);
  --color-sidebar-foreground: var(--sidebar-foreground);
  --color-sidebar-primary: var(--sidebar-primary);
  --color-sidebar-primary-foreground: var(--sidebar-primary-foreground);
  --color-sidebar-accent: var(--sidebar-accent);
  --color-sidebar-accent-foreground: var(--sidebar-accent-foreground);
  --color-sidebar-border: var(--sidebar-border);
  --color-sidebar-ring: var(--sidebar-ring);
}

:root {
  --radius: 0.625rem;
  --background: oklch(1 0 0);
  --foreground: oklch(0.145 0 0);
  --card: oklch(1 0 0);
  --card-foreground: oklch(0.145 0 0);
  --popover: oklch(1 0 0);
  --popover-foreground: oklch(0.145 0 0);
  --primary: oklch(0.205 0 0);
  --primary-foreground: oklch(0.985 0 0);
  --secondary: oklch(0.97 0 0);
  --secondary-foreground: oklch(0.205 0 0);
  --muted: oklch(0.97 0 0);
  --muted-foreground: oklch(0.556 0 0);
  --accent: oklch(0.97 0 0);
  --accent-foreground: oklch(0.205 0 0);
  --destructive: oklch(0.577 0.245 27.325);
  --border: oklch(0.922 0 0);
  --input: oklch(0.922 0 0);
  --ring: oklch(0.708 0 0);
  --chart-1: oklch(0.646 0.222 41.116);
  --chart-2: oklch(0.6 0.118 184.704);
  --chart-3: oklch(0.398 0.07 227.392);
  --chart-4: oklch(0.828 0.189 84.429);
  --chart-5: oklch(0.769 0.188 70.08);
  --sidebar: oklch(0.985 0 0);
  --sidebar-foreground: oklch(0.145 0 0);
  --sidebar-primary: oklch(0.205 0 0);
  --sidebar-primary-foreground: oklch(0.985 0 0);
  --sidebar-accent: oklch(0.97 0 0);
  --sidebar-accent-foreground: oklch(0.205 0 0);
  --sidebar-border: oklch(0.922 0 0);
  --sidebar-ring: oklch(0.708 0 0);
}

.dark {
  --background: oklch(0.145 0 0);
  --foreground: oklch(0.985 0 0);
  --card: oklch(0.205 0 0);
  --card-foreground: oklch(0.985 0 0);
  --popover: oklch(0.205 0 0);
  --popover-foreground: oklch(0.985 0 0);
  --primary: oklch(0.922 0 0);
  --primary-foreground: oklch(0.205 0 0);
  --secondary: oklch(0.269 0 0);
  --secondary-foreground: oklch(0.985 0 0);
  --muted: oklch(0.269 0 0);
  --muted-foreground: oklch(0.708 0 0);
  --accent: oklch(0.269 0 0);
  --accent-foreground: oklch(0.985 0 0);
  --destructive: oklch(0.704 0.191 22.216);
  --border: oklch(1 0 0 / 10%);
  --input: oklch(1 0 0 / 15%);
  --ring: oklch(0.556 0 0);
  --chart-1: oklch(0.488 0.243 264.376);
  --chart-2: oklch(0.696 0.17 162.48);
  --chart-3: oklch(0.769 0.188 70.08);
  --chart-4: oklch(0.627 0.265 303.9);
  --chart-5: oklch(0.645 0.246 16.439);
  --sidebar: oklch(0.205 0 0);
  --sidebar-foreground: oklch(0.985 0 0);
  --sidebar-primary: oklch(0.488 0.243 264.376);
  --sidebar-primary-foreground: oklch(0.985 0 0);
  --sidebar-accent: oklch(0.269 0 0);
  --sidebar-accent-foreground: oklch(0.985 0 0);
  --sidebar-border: oklch(1 0 0 / 10%);
  --sidebar-ring: oklch(0.556 0 0);
}

@layer base {
  * {
    @apply border-border outline-ring/50;
  }
  body {
    @apply bg-background text-foreground;
  }
}

button {
  cursor: pointer;
}

20
web/tsconfig.json Normal file

@@ -0,0 +1,20 @@
{
  "extends": "astro/tsconfigs/strict",
  "include": [
    ".astro/types.d.ts",
    "**/*"
  ],
  "exclude": [
    "dist"
  ],
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "react",
    "baseUrl": ".",
    "paths": {
      "@/*": [
        "./src/*"
      ]
    }
  }
}