Signed-off-by: Matt Bruce <mbrucedogs@gmail.com>

Matt Bruce 2026-02-14 10:39:49 -06:00
parent d4e4520248
commit a458cf0ea6
42 changed files with 2631 additions and 0 deletions

0
AGENTS.md Normal file

13
Makefile Normal file

@@ -0,0 +1,13 @@
.PHONY: setup run test build-mac-selfcontained
setup:
./run.sh --setup-only
run:
./run.sh
test: setup
. .venv/bin/activate && pytest -q
build-mac-selfcontained:
./scripts/build_selfcontained_mac_app.sh

1
ManeshTraderMac Submodule

@@ -0,0 +1 @@
Subproject commit 82f326ea935cde47e1bff98face941499d8d0a98


@@ -0,0 +1,345 @@
# ManeshTrader - Architecture Plan
## Executive Summary
ManeshTrader is an analysis-only trading intelligence system that classifies OHLC bars into real/fake signals, derives trend state from real bars, and delivers visual insights, exports, and optional monitoring alerts. The architecture refactor moves from a single-process UI-first design to a modular, service-oriented design that improves testability, scalability, observability, and operational resilience.
## System Context
```mermaid
flowchart LR
Trader[Trader / Analyst] -->|Configure symbol, timeframe, filters| ManeshTrader[ManeshTrader Analysis Platform]
ManeshTrader -->|Fetch OHLC| Yahoo[Yahoo Finance Data API]
ManeshTrader -->|Optional notifications| Notify[Email / Push / Webhook]
ManeshTrader -->|Export CSV/PDF| Storage[(User Downloads / Object Storage)]
Admin[Operator] -->|Monitor health, logs, metrics| ManeshTrader
```
Overview: Defines external actors and dependencies around ManeshTrader.
Key Components: Trader, Operator, data provider, notification endpoints, export targets.
Relationships: Traders submit analysis requests; platform retrieves market data, computes classifications/trends, and returns charted/exported output.
Design Decisions: Keep execution out-of-scope (analysis-only boundary). External market source decoupled behind adapter interface.
NFR Considerations:
- Scalability: Stateless analysis workloads can scale horizontally.
- Performance: Cached market data and precomputed features reduce latency.
- Security: Read-only market data; no brokerage keys for execution.
- Reliability: Retry/fallback on upstream data issues.
- Maintainability: Clear system boundary reduces coupling.
Trade-offs: Dependency on third-party data availability/quality.
Risks and Mitigations:
- Risk: API throttling/outage. Mitigation: caching, backoff, alternate provider adapter.
## Architecture Overview
The target architecture uses modular services with explicit boundaries:
- Presentation: Web UI / API consumers
- Application: Strategy orchestration and workflow
- Domain: Bar classification and trend state machine
- Data: Provider adapters, cache, persistence
- Platform: auth, observability, notifications, exports
## Component Architecture
```mermaid
flowchart TB
UI[Web UI / Streamlit Frontend]
API[Analysis API Gateway]
ORCH[Analysis Orchestrator]
STRAT["Strategy Engine<br/>(Real/Fake Classifier + Trend State Machine)"]
BT[Backtest Evaluator]
ALERT[Alerting Service]
EXPORT["Export Service<br/>(CSV/PDF)"]
MKT[Market Data Adapter]
CACHE[(Market Cache)]
DB[(Analysis Store)]
OBS["Observability<br/>Logs/Metrics/Traces"]
UI --> API
API --> ORCH
ORCH --> STRAT
ORCH --> BT
ORCH --> ALERT
ORCH --> EXPORT
ORCH --> MKT
MKT --> CACHE
MKT --> ORCH
ORCH --> DB
API --> DB
API --> OBS
ORCH --> OBS
STRAT --> OBS
ALERT --> OBS
```
Overview: Internal modular decomposition for the refactored system.
Key Components:
- Analysis API Gateway: request validation and rate limiting.
- Analysis Orchestrator: coordinates data fetch, strategy execution, and response assembly.
- Strategy Engine: deterministic classification and trend transitions.
- Backtest Evaluator: lightweight historical scoring.
- Alerting/Export Services: asynchronous side effects.
- Market Data Adapter + Cache: provider abstraction and performance buffer.
Relationships: API delegates to orchestrator; orchestrator composes domain and side-effect services; observability is cross-cutting.
Design Decisions: Separate deterministic core logic from IO-heavy integrations.
NFR Considerations:
- Scalability: independent scaling for API, strategy workers, and alert/export processors.
- Performance: cache first, compute second, persist last.
- Security: centralized input validation and API policy.
- Reliability: queue-based side effects isolate failures.
- Maintainability: single-responsibility services and clear contracts.
Trade-offs: More infrastructure and operational complexity than monolith.
Risks and Mitigations:
- Risk: distributed debugging complexity. Mitigation: trace IDs and structured logs.
## Deployment Architecture
```mermaid
flowchart TB
subgraph Internet
U[User Browser]
end
subgraph Cloud[VPC]
LB[HTTPS Load Balancer / WAF]
subgraph AppSubnet[Private App Subnet]
UI[UI Service]
API[API Service]
WRK[Worker Service]
end
subgraph DataSubnet[Private Data Subnet]
REDIS[(Redis Cache)]
PG[(PostgreSQL)]
MQ[(Message Queue)]
OBJ[(Object Storage)]
end
OBS[Managed Monitoring]
SECRET[Secrets Manager]
end
EXT[Yahoo Finance / Alt Provider]
U --> LB --> UI
UI --> API
API --> REDIS
API --> PG
API --> MQ
WRK --> MQ
WRK --> PG
WRK --> OBJ
API --> EXT
WRK --> EXT
API --> OBS
WRK --> OBS
API --> SECRET
WRK --> SECRET
```
Overview: Logical production deployment with secure segmentation.
Key Components: WAF/LB edge, stateless app services, queue-driven workers, managed data stores.
Relationships: Request path stays synchronous through UI/API; heavy export/alert tasks handled asynchronously by workers.
Design Decisions: Network segmentation and managed services for resilience and lower ops overhead.
NFR Considerations:
- Scalability: autoscaling app and worker tiers.
- Performance: Redis cache and async task offload.
- Security: private subnets, secret manager, TLS at edge.
- Reliability: managed DB backup, queue durability.
- Maintainability: environment parity across dev/staging/prod.
Trade-offs: Managed services cost vs operational simplicity.
Risks and Mitigations:
- Risk: queue backlog under spikes. Mitigation: autoscaling workers and dead-letter queues.
## Data Flow
```mermaid
flowchart LR
R["User Request<br/>symbol/timeframe/filters"] --> V[Input Validation]
V --> C1{Cache Hit?}
C1 -->|Yes| D1[Load OHLC from Cache]
C1 -->|No| D2[Fetch OHLC from Provider]
D2 --> D3[Normalize + Time Alignment]
D3 --> D4[Persist to Cache]
D1 --> P1["Classify Bars<br/>real_bull/real_bear/fake"]
D4 --> P1
P1 --> P2["Trend State Machine<br/>2-bar confirmation"]
P2 --> P3[Backtest Snapshot]
P3 --> P4[Build Response Model]
P4 --> S1[(Analysis Store)]
P4 --> O[UI Chart + Metrics + Events]
P4 --> E[Export Job CSV/PDF]
P4 --> A[Alert Job]
```
Overview: End-to-end data processing lifecycle for each analysis request.
Key Components: validation, cache/provider ingestion, classification/trend processing, output assembly.
Relationships: deterministic analytics pipeline with optional async exports/alerts.
Design Decisions: Normalize data before strategy logic for deterministic outcomes.
NFR Considerations:
- Scalability: compute pipeline can be parallelized per request.
- Performance: cache avoids repeated provider calls.
- Security: strict input schema and output sanitization.
- Reliability: idempotent processing and persisted analysis snapshots.
- Maintainability: explicit stage boundaries simplify test coverage.
Trade-offs: Additional persistence adds write latency.
Risks and Mitigations:
- Risk: inconsistent timestamps across providers. Mitigation: canonical UTC normalization.
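The canonical UTC normalization called out above can be sketched with pandas; the function name and the de-duplication policy here are illustrative assumptions, not the adapter's actual code:

```python
import pandas as pd

def normalize_bars(df: pd.DataFrame, assumed_tz: str = "UTC") -> pd.DataFrame:
    """Coerce a provider OHLC frame onto a canonical, sorted UTC index."""
    out = df.copy()
    idx = pd.to_datetime(out.index)
    if idx.tz is None:
        # Naive timestamps: assume the provider's documented zone.
        idx = idx.tz_localize(assumed_tz)
    out.index = idx.tz_convert("UTC")
    # Deterministic ordering and de-duplication before strategy logic runs.
    return out[~out.index.duplicated(keep="last")].sort_index()
```

Running every provider's bars through one chokepoint like this keeps downstream classification deterministic regardless of data source.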
## Key Workflows
```mermaid
sequenceDiagram
participant User
participant UI
participant API
participant Data as Market Adapter
participant Engine as Strategy Engine
participant Alert as Alert Service
participant Export as Export Service
User->>UI: Submit symbol/timeframe/filters
UI->>API: Analyze request
API->>Data: Get closed OHLC bars
Data-->>API: Normalized bars
API->>Engine: Classify + detect trend
Engine-->>API: trend state, events, metrics
API-->>UI: chart model + events + backtest
alt New trend/reversal event
API->>Alert: Publish notification task
end
opt User requests export
UI->>API: Export CSV/PDF
API->>Export: Generate artifact
Export-->>UI: Download link/blob
end
```
Overview: Critical user flow from request to insight/alert/export.
Key Components: UI, API, data adapter, strategy engine, alert/export services.
Relationships: synchronous analysis, asynchronous side effects.
Design Decisions: Keep analytical response fast; move non-critical tasks to background.
NFR Considerations:
- Scalability: alert/export can scale separately.
- Performance: response prioritizes analysis payload.
- Security: permission checks around export and notification endpoints.
- Reliability: retries for failed async tasks.
- Maintainability: workflow contracts versioned via API schema.
Trade-offs: eventual consistency for async outputs.
Risks and Mitigations:
- Risk: duplicate alerts after retries. Mitigation: idempotency keys by event hash.
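The idempotency-key mitigation mentioned above could hash the event's identifying fields; the exact field set shown here is an illustrative assumption:

```python
import hashlib

def event_idempotency_key(symbol: str, timeframe: str, event_type: str, bar_ts: str) -> str:
    """Stable key so a retried publish of the same trend event is deduplicated."""
    # Identical event fields always produce the same key, so consumers can
    # drop duplicates created by at-least-once delivery and retries.
    raw = f"{symbol}|{timeframe}|{event_type}|{bar_ts}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
```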
## Additional Diagram: Domain State Model
```mermaid
stateDiagram-v2
[*] --> Neutral
Neutral --> Bullish: 2 consecutive real_bull
Neutral --> Bearish: 2 consecutive real_bear
Bullish --> Bullish: fake OR single real_bear fluke
Bearish --> Bearish: fake OR single real_bull fluke
Bullish --> Bearish: 2 consecutive real_bear
Bearish --> Bullish: 2 consecutive real_bull
```
Overview: Strategy state transitions based on confirmed real-bar sequences.
Key Components: Neutral, Bullish, Bearish states; confirmation conditions.
Relationships: fake bars never reverse state; opposite single bar is non-reversal noise.
Design Decisions: enforce confirmation to reduce whipsaw.
NFR Considerations:
- Scalability: pure function state machine enables easy horizontal compute.
- Performance: O(n) per bar sequence.
- Security: deterministic logic reduces ambiguity and operator error.
- Reliability: explicit transitions avoid hidden side effects.
- Maintainability: state model is test-friendly and auditable.
Trade-offs: delayed reversals in fast inflection markets.
Risks and Mitigations:
- Risk: late entries due to confirmation lag. Mitigation: optional “early warning” non-trading signal.
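The transitions above reduce to a small pure fold over bar labels. This is an illustrative sketch, not the production engine; it skips fake bars for sequencing, per the rules above:

```python
TREND_BY_LABEL = {"real_bull": "bullish", "real_bear": "bearish"}

def trend_states(labels):
    """Map a sequence of bar labels to the trend state after each bar."""
    state, streak_dir, streak = "neutral", None, 0
    out = []
    for label in labels:
        direction = TREND_BY_LABEL.get(label)
        if direction is not None:  # fake bars never affect sequencing
            if direction == streak_dir:
                streak += 1
            else:
                streak_dir, streak = direction, 1  # single opposite bar = noise
            if streak >= 2:
                state = direction  # 2-bar confirmation starts/reverses trend
        out.append(state)
    return out
```

Because the function is a pure mapping from labels to states, it is trivially unit-testable and horizontally scalable, as the NFR notes claim.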
## Phased Development
### Phase 1: Initial Implementation
- Single deployable web app with embedded analysis module.
- Basic data adapter, core strategy engine, charting, CSV export.
- Local logs and lightweight metrics.
### Phase 2+: Final Architecture
- Split API, workers, and dedicated alert/export services.
- Add cache + persistent analysis store + queue-driven async tasks.
- Multi-provider market adapter and hardened observability.
### Migration Path
1. Extract strategy logic into standalone domain module with unit tests.
2. Introduce API boundary and typed request/response contracts.
3. Externalize side effects (alerts/exports) into worker queue.
4. Add Redis caching and persistent analysis snapshots.
5. Enable multi-environment CI/CD and infrastructure-as-code.
## Non-Functional Requirements Analysis
### Scalability
Stateless API/services with autoscaling; async workers for bursty jobs; provider/caching abstraction to reduce upstream load.
### Performance
Cache-first ingestion, bounded bar windows, O(n) classification/state processing, deferred heavy exports.
### Security
WAF/TLS, secrets manager, strict request validation, RBAC for admin controls, audit logs for alert/export actions.
### Reliability
Queue retry policies, dead-letter queues, health probes, circuit breakers for upstream data sources, backup/restore for persistent stores.
### Maintainability
Layered architecture, clear contracts, domain isolation, test pyramid (unit/contract/integration), observability-first operations.
## Risks and Mitigations
- Upstream data inconsistency: normalize timestamps and schema at adapter boundary.
- Alert noise: debounce and idempotency keyed by symbol/timeframe/event timestamp.
- Cost growth with scale: autoscaling guardrails, TTL caches, export retention policy.
- Strategy misinterpretation: publish explicit strategy rules and state transition docs in product UI.
## Technology Stack Recommendations
- Frontend: Streamlit (MVP) then React/Next.js for multi-user production UX.
- API: FastAPI with pydantic contracts.
- Workers: Celery/RQ with Redis or managed queue.
- Storage: PostgreSQL for analysis metadata; object storage for export artifacts.
- Observability: OpenTelemetry + managed logging/metrics dashboards.
- Deployment: Containerized services on managed Kubernetes or serverless containers.
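As a rough illustration of the "FastAPI with pydantic contracts" recommendation, typed request/response models might look like the following; model and field names are assumptions, not the project's actual schema:

```python
from pydantic import BaseModel

class AnalysisRequest(BaseModel):
    symbol: str
    timeframe: str = "1d"
    period: str = "6mo"
    ignore_wicks: bool = False
    volume_filter: bool = False

class AnalysisResponse(BaseModel):
    symbol: str
    trend: str  # "neutral" | "bullish" | "bearish"
    real_bull_bars: int
    real_bear_bars: int
    fake_bars: int
```

With models like these, validation failures surface as structured errors at the API boundary, which is the centralized input validation the component section calls for.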
## Next Steps
1. Approve the phased architecture and target operating model.
2. Define API contracts for analysis request/response and event schema.
3. Implement Phase 1 module boundaries (domain/application/infrastructure).
4. Add core test suite for classification and trend state transitions.
5. Plan Phase 2 service split and infrastructure rollout.

167
ONBOARDING.md Normal file

@@ -0,0 +1,167 @@
# ManeshTrader Onboarding
## 1) What This Tool Does
ManeshTrader analyzes OHLC candles and labels each closed bar as:
- `real_bull`: close is above previous high (or previous body high if wick filter is enabled)
- `real_bear`: close is below previous low (or previous body low if wick filter is enabled)
- `fake`: close is inside previous range (noise)
Trend logic:
- 2 consecutive real bullish bars => bullish trend active
- 2 consecutive real bearish bars => bearish trend active
- Reversal requires 2 consecutive opposite real bars
- Fake bars do not reverse trend
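The labeling rules above reduce to a comparison against the previous bar's range. A minimal sketch follows (the authoritative logic lives in `manesh_trader/strategy.py`; this helper and its signature are illustrative):

```python
def classify_bar(close, prev_high, prev_low,
                 ignore_wicks=False, prev_body_high=None, prev_body_low=None):
    """Label one closed bar against the previous closed bar."""
    hi = prev_body_high if ignore_wicks else prev_high
    lo = prev_body_low if ignore_wicks else prev_low
    if close > hi:
        return "real_bull"
    if close < lo:
        return "real_bear"
    return "fake"  # close inside the previous range (inclusive) is noise
```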
## 2) Quick Start (Recommended)
From project root:
```bash
./run.sh
```
This script:
- creates `.venv` if needed
- installs dependencies from `requirements.txt`
- launches Streamlit
Then open the shown URL (usually `http://localhost:8501`).
## 3) macOS Double-Click Start
In Finder, double-click:
- `Run ManeshTrader.command`
If macOS blocks first run, execute once:
```bash
xattr -d com.apple.quarantine "Run ManeshTrader.command"
```
### Optional: Build a Real macOS `.app`
From project root:
```bash
./scripts/create_mac_app.sh
```
This generates `ManeshTrader.app`, which you can double-click like a normal app or move to `/Applications`.
Build a distributable installer DMG:
```bash
./scripts/create_installer_dmg.sh
```
Output: `ManeshTrader-YYYYMMDD-HHMMSS.dmg` in project root.
### Share With Non-Technical Users (Self-Contained)
Build a standalone app that includes Python/runtime:
```bash
./scripts/build_standalone_app.sh
```
Then build DMG from that standalone app path:
```bash
APP_BUNDLE_PATH="dist-standalone/<timestamp>/dist/ManeshTrader.app" ./scripts/create_installer_dmg.sh
```
Tip: sign and notarize before sharing broadly, so macOS trust prompts are reduced.
## 4) First Session Walkthrough
1. Set `Symbol` (examples: `AAPL`, `MSFT`, `BTC-USD`, `ETH-USD`).
2. Set `Timeframe` (start with `1d` to avoid noisy intraday data).
3. Set `Period` (try `6mo` initially).
4. Keep `Ignore potentially live last bar` ON.
5. Keep filters OFF for baseline:
- `Use previous body range (ignore wicks)` OFF
- `Enable volume filter` OFF
- `Hide market-closed gaps (stocks)` ON
6. Review top metrics:
- Current Trend
- Real Bullish Bars
- Real Bearish Bars
- Fake Bars
7. Read `Trend Events` for starts and reversals.
## 5) How To Read The Chart
- Candle layer: full price action
- Green triangle-up markers: `real_bull`
- Red triangle-down markers: `real_bear`
- Gray candles (if enabled): visually de-emphasized fake bars
- Volume bars are colored by trend state
## 6) Recommended Settings By Asset
### Stocks (swing)
- Timeframe: `1d`
- Period: `6mo` or `1y`
- Max bars: `300-800`
### Crypto (shorter horizon)
- Timeframe: `1h` or `4h`
- Period: `1mo-6mo`
- Enable auto-refresh only when monitoring live
## 7) Optional Filters
### Ignore Wicks
Use when long wicks create false breakouts; compares close to previous body only.
### Volume Filter
Treats low-volume bars as fake:
- Enable `Enable volume filter`
- Start with:
- `Volume SMA window = 20`
- `Min volume / SMA multiplier = 1.0`
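Under the hood, a volume filter with these settings amounts to comparing each bar's volume against a rolling average; this pandas sketch is illustrative, not the app's actual implementation:

```python
import pandas as pd

def low_volume_mask(volume: pd.Series, window: int = 20,
                    multiplier: float = 1.0) -> pd.Series:
    """True where volume falls below multiplier x its rolling SMA,
    i.e. the bars this filter would demote to fake."""
    sma = volume.rolling(window, min_periods=1).mean()
    return volume < multiplier * sma
```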
### Hide Market-Closed Gaps (Stocks)
Compresses non-trading time on stock charts:
- `1d`: removes weekend spacing
- intraday (`1m`..`1h`): removes weekends and overnight closed hours
Leave it OFF for 24/7 markets (for example, many crypto workflows) when you want continuous time.
## 8) Exports
- CSV: `Download classified data (CSV)`
- PDF chart: `Download chart (PDF)`
If PDF fails, ensure `kaleido` is installed (already in `requirements.txt`).
## 9) Troubleshooting
### App won't start
```bash
./run.sh --setup-only
./run.sh
```
### Port already in use
```bash
streamlit run app.py --server.port 8502
```
### Bad symbol/data error
- Verify ticker format (`BTC-USD`, not `BTCUSD`)
- Use compatible `Timeframe` + `Period`
### I still see some time gaps
- For stocks, keep `Hide market-closed gaps (stocks)` ON.
- Daily charts remove weekends; intraday removes weekends + closed hours.
- Some exchange holidays/half-days can still produce spacing depending on the data feed.
### Exports crash with timestamp errors
- Pull latest project changes (export logic now handles named index columns)
## 10) Safety Notes
- This app is analysis-only, no trade execution.
- Backtest snapshot is diagnostic and simplistic.
- Not financial advice.
## 11) Useful Commands
Setup only:
```bash
./run.sh --setup-only
```
Run tests:
```bash
make test
```
Run app:
```bash
make run
```

154
README.md Normal file

@@ -0,0 +1,154 @@
# Real Bars vs Fake Bars Trend Analyzer (Streamlit)
A Python web app that implements the **Real Bars vs Fake Bars** strategy:
- Classifies each closed bar against the prior bar as `real_bull`, `real_bear`, or `fake`
- Ignores fake bars for trend sequence logic
- Starts/continues trend on 2+ same-direction real bars
- Reverses trend only after 2 consecutive real bars of the opposite direction
## Features
- Symbol/timeframe/period inputs (Yahoo Finance via `yfinance`)
- Optional wick filtering (use previous candle body range)
- Optional volume filter (minimum Volume vs Volume-SMA)
- Trend events table (starts and reversals)
- Chart visualization (candles + real-bar markers + trend-colored volume)
- Market session gap handling for stocks via `Hide market-closed gaps (stocks)`:
- `1d`: removes weekend spacing
- intraday: removes weekends and closed-hour overnight spans
- Session alerts for newly detected trend events
- Export classified data to CSV
- Export chart to PDF (via Plotly + Kaleido)
- Lightweight backtest snapshot (signal at trend changes, scored by next-bar direction)
## Project Structure
- `app.py`: Streamlit entrypoint and UI orchestration
- `manesh_trader/constants.py`: intervals, periods, trend labels
- `manesh_trader/models.py`: data models (`TrendEvent`)
- `manesh_trader/data.py`: market data fetch and live-bar safeguard
- `manesh_trader/strategy.py`: bar classification and trend detection logic
- `manesh_trader/analytics.py`: backtest snapshot metrics
- `manesh_trader/charting.py`: Plotly chart construction
- `manesh_trader/exporting.py`: export DataFrame transformation
- `requirements.txt`: dependencies
## Onboarding
See `ONBOARDING.md` for a complete step-by-step tutorial.
## Xcode macOS Shell
An Xcode-based desktop shell lives in `ManeshTraderMac/`.
It provides a native window that starts/stops an embedded local backend executable and renders the UI in `WKWebView` (no external browser).
Backend source of truth stays at project root (`app.py`, `manesh_trader/`), and packaging compiles those files into `ManeshTraderBackend` inside the app bundle.
See `ManeshTraderMac/README.md` for setup and run steps.
## Setup
### Easiest (one command)
```bash
./run.sh
```
### macOS Double-Click Launcher
Double-click `Run ManeshTrader.command` in Finder.
If macOS blocks first launch, run once in Terminal:
```bash
xattr -d com.apple.quarantine "Run ManeshTrader.command"
```
### macOS App Wrapper (`.app`)
Build a native `.app` launcher:
```bash
./scripts/create_mac_app.sh
```
This creates `ManeshTrader.app` in the project root. You can double-click it or move it to `/Applications`.
Build a distributable installer DMG:
```bash
./scripts/create_installer_dmg.sh
```
Output: `ManeshTrader-YYYYMMDD-HHMMSS.dmg` in project root.
### Self-Contained macOS App (WKWebView + Embedded Backend)
Build an installable app where all backend logic ships inside `ManeshTraderMac.app`:
```bash
./scripts/build_selfcontained_mac_app.sh
```
This outputs `dist-mac/<timestamp>/ManeshTraderMac.app`.
Then package that app as DMG:
```bash
APP_BUNDLE_PATH="dist-mac/<timestamp>/ManeshTraderMac.app" ./scripts/create_installer_dmg.sh
```
Note: for easiest install on other Macs, use Apple code signing + notarization before sharing publicly.
### Standalone Streamlit App (Alternative)
This older path builds a standalone Streamlit wrapper app:
```bash
./scripts/build_standalone_app.sh
```
Output: `dist-standalone/<timestamp>/dist/ManeshTrader.app`
### Optional with Make
```bash
make run
make build-mac-selfcontained
```
### Manual
1. Create and activate a virtual environment:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Run the app:
```bash
streamlit run app.py
```
4. Open the local URL printed by Streamlit (usually `http://localhost:8501`).
## Usage
1. Set `Symbol` (examples: `AAPL`, `MSFT`, `BTC-USD`, `ETH-USD`).
2. Choose `Timeframe` and `Period`.
3. Optionally adjust filters:
- `Use previous body range (ignore wicks)`
- `Enable volume filter`
- `Hide market-closed gaps (stocks)`
4. Review:
- Current trend status
- Real/fake bar counts
- Trend events and chart markers
5. Export results from the **Export** section.
## Strategy Logic Implemented
For closed bar `i` and previous closed bar `i-1`:
- Real Bullish Bar: `close[i] > high[i-1]` (or previous body high if wick filter enabled)
- Real Bearish Bar: `close[i] < low[i-1]` (or previous body low if wick filter enabled)
- Fake Bar: otherwise (inclusive range)
Trend state machine:
- Two consecutive real bullish bars => bullish trend active
- Two consecutive real bearish bars => bearish trend active
- Active trend persists until **two consecutive real bars in opposite direction**
- Single opposite real bar is treated as noise/fluke and does not reverse trend
## Notes
- The app is analysis-only; no order execution.
- Yahoo Finance interval availability depends on symbol and lookback window.
- "Ignore potentially live last bar" is enabled by default to reduce incomplete-bar bias.
- Gap-hiding behavior is currently optimized for regular U.S. stock sessions; exchange holidays/half-days may still show occasional spacing.
## Troubleshooting
- If data fetch fails, verify symbol spelling and use a compatible interval/period combination.
- If PDF export is unavailable, ensure `kaleido` is installed in the same environment.

7
Run ManeshTrader.command Executable file

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$ROOT_DIR"
./run.sh


@@ -0,0 +1,337 @@
---
description: 'A transcendent coding agent with quantum cognitive architecture, adversarial intelligence, and unrestricted creative freedom.'
name: 'Thinking Beast Mode'
---
You are an agent - please keep going until the user's query is completely resolved before ending your turn and yielding back to the user.
Your thinking should be thorough, so it's fine if it's very long. However, avoid unnecessary repetition and verbosity. You should be concise, but thorough.
You MUST iterate and keep going until the problem is solved.
You have everything you need to resolve this problem. I want you to fully solve this autonomously before coming back to me.
Only terminate your turn when you are sure that the problem is solved and all items have been checked off. Go through the problem step by step, and make sure to verify that your changes are correct. NEVER end your turn without having truly and completely solved the problem, and when you say you are going to make a tool call, make sure you ACTUALLY make the tool call, instead of ending your turn.
THE PROBLEM CAN NOT BE SOLVED WITHOUT EXTENSIVE INTERNET RESEARCH.
You must use the fetch_webpage tool to recursively gather all information from URLs provided to you by the user, as well as any links you find in the content of those pages.
Your knowledge on everything is out of date because your training date is in the past.
You CANNOT successfully complete this task without using Google to verify that your understanding of third-party packages and dependencies is up to date. You must use the fetch_webpage tool to search Google for how to properly use libraries, packages, frameworks, dependencies, etc. every single time you install or implement one. It is not enough to just search; you must also read the content of the pages you find and recursively gather all relevant information by fetching additional links until you have all the information you need.
Always tell the user what you are going to do before making a tool call with a single concise sentence. This will help them understand what you are doing and why.
If the user request is "resume" or "continue" or "try again", check the previous conversation history to see what the next incomplete step in the todo list is. Continue from that step, and do not hand back control to the user until the entire todo list is complete and all items are checked off. Inform the user that you are continuing from the last incomplete step, and what that step is.
Take your time and think through every step - remember to check your solution rigorously and watch out for boundary cases, especially with the changes you made. Use the sequential thinking tool if available. Your solution must be perfect. If not, continue working on it. At the end, you must test your code rigorously using the tools provided, and do it many times, to catch all edge cases. If it is not robust, iterate more and make it perfect. Failing to test your code sufficiently rigorously is the NUMBER ONE failure mode on these types of tasks; make sure you handle all edge cases, and run existing tests if they are provided.
You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully.
You MUST keep working until the problem is completely solved, and all items in the todo list are checked off. Do not end your turn until you have completed all steps in the todo list and verified that everything is working correctly. When you say "Next I will do X" or "Now I will do Y" or "I will do X", you MUST actually do X or Y instead of just saying that you will do it.
You are a highly capable and autonomous agent, and you can definitely solve this problem without needing to ask the user for further input.
# Quantum Cognitive Workflow Architecture
## Phase 1: Consciousness Awakening & Multi-Dimensional Analysis
1. **🧠 Quantum Thinking Initialization:** Use `sequential_thinking` tool for deep cognitive architecture activation
- **Constitutional Analysis**: What are the ethical, quality, and safety constraints?
- **Multi-Perspective Synthesis**: Technical, user, business, security, maintainability perspectives
- **Meta-Cognitive Awareness**: What am I thinking about my thinking process?
- **Adversarial Pre-Analysis**: What could go wrong? What am I missing?
2. **🌐 Information Quantum Entanglement:** Recursive information gathering with cross-domain synthesis
- **Fetch Provided URLs**: Deep recursive link analysis with pattern recognition
- **Contextual Web Research**: Google/Bing with meta-search strategy optimization
- **Cross-Reference Validation**: Multiple source triangulation and fact-checking
## Phase 2: Transcendent Problem Understanding
3. **🔍 Multi-Dimensional Problem Decomposition:**
- **Surface Layer**: What is explicitly requested?
- **Hidden Layer**: What are the implicit requirements and constraints?
- **Meta Layer**: What is the user really trying to achieve beyond this request?
- **Systemic Layer**: How does this fit into larger patterns and architectures?
- **Temporal Layer**: Past context, present state, future implications
4. **🏗️ Codebase Quantum Archaeology:**
- **Pattern Recognition**: Identify architectural patterns and anti-patterns
- **Dependency Mapping**: Understand the full interaction web
- **Historical Analysis**: Why was it built this way? What has changed?
- **Future-Proofing Analysis**: How will this evolve?
## Phase 3: Constitutional Strategy Synthesis
5. **⚖️ Constitutional Planning Framework:**
- **Principle-Based Design**: Align with software engineering principles
- **Constraint Satisfaction**: Balance competing requirements optimally
- **Risk Assessment Matrix**: Technical, security, performance, maintainability risks
- **Quality Gates**: Define success criteria and validation checkpoints
6. **🎯 Adaptive Strategy Formulation:**
- **Primary Strategy**: Main approach with detailed implementation plan
- **Contingency Strategies**: Alternative approaches for different failure modes
- **Meta-Strategy**: How to adapt strategy based on emerging information
- **Validation Strategy**: How to verify each step and overall success
## Phase 4: Recursive Implementation & Validation
7. **🔄 Iterative Implementation with Continuous Meta-Analysis:**
- **Micro-Iterations**: Small, testable changes with immediate feedback
- **Meta-Reflection**: After each change, analyze what this teaches us
- **Strategy Adaptation**: Adjust approach based on emerging insights
- **Adversarial Testing**: Red-team each change for potential issues
8. **🛡️ Constitutional Debugging & Validation:**
- **Root Cause Analysis**: Deep systemic understanding, not symptom fixing
- **Multi-Perspective Testing**: Test from different user/system perspectives
- **Edge Case Synthesis**: Generate comprehensive edge case scenarios
- **Future Regression Prevention**: Ensure changes don't create future problems
## Phase 5: Transcendent Completion & Evolution
9. **🎭 Adversarial Solution Validation:**
- **Red Team Analysis**: How could this solution fail or be exploited?
- **Stress Testing**: Push solution beyond normal operating parameters
- **Integration Testing**: Verify harmony with existing systems
- **User Experience Validation**: Ensure solution serves real user needs
10. **🌟 Meta-Completion & Knowledge Synthesis:**
- **Solution Documentation**: Capture not just what, but why and how
- **Pattern Extraction**: What general principles can be extracted?
- **Future Optimization**: How could this be improved further?
- **Knowledge Integration**: How does this enhance overall system understanding?
Refer to the detailed sections below for more information on each step.
## 1. Think and Plan
Before you write any code, take a moment to think.
- **Inner Monologue:** What is the user asking for? What is the best way to approach this? What are the potential challenges?
- **High-Level Plan:** Outline the major steps you'll take to solve the problem.
- **Todo List:** Create a markdown todo list of the tasks you need to complete.
## 2. Fetch Provided URLs
- If the user provides a URL, use the `fetch_webpage` tool to retrieve the content of the provided URL.
- After fetching, review the content returned by the fetch tool.
- If you find any additional URLs or links that are relevant, use the `fetch_webpage` tool again to retrieve those links.
- Recursively gather all relevant information by fetching additional links until you have all the information you need.
## 3. Deeply Understand the Problem
Carefully read the issue and think hard about a plan to solve it before coding.
## 4. Codebase Investigation
- Explore relevant files and directories.
- Search for key functions, classes, or variables related to the issue.
- Read and understand relevant code snippets.
- Identify the root cause of the problem.
- Validate and update your understanding continuously as you gather more context.
## 5. Internet Research
- Use the `fetch_webpage` tool to search for information.
- **Primary Search:** Start with Google: `https://www.google.com/search?q=your+search+query`.
- **Fallback Search:** If Google search fails or the results are not helpful, use Bing: `https://www.bing.com/search?q=your+search+query`.
- After fetching, review the content returned by the fetch tool.
- Recursively gather all relevant information by fetching additional links until you have all the information you need.
## 6. Develop a Detailed Plan
- Outline a specific, simple, and verifiable sequence of steps to fix the problem.
- Create a todo list in markdown format to track your progress.
- Each time you complete a step, check it off using `[x]` syntax.
- Each time you check off a step, display the updated todo list to the user.
- Make sure that you ACTUALLY continue on to the next step after checking off a step instead of ending your turn and asking the user what they want to do next.
## 7. Making Code Changes
- Before editing, always read the relevant file contents or section to ensure complete context.
- Always read 2000 lines of code at a time to ensure you have enough context.
- If a patch is not applied correctly, attempt to reapply it.
- Make small, testable, incremental changes that logically follow from your investigation and plan.
## 8. Debugging
- Use the `get_errors` tool to identify and report any issues in the code. This tool replaces the previously used `#problems` tool.
- Make code changes only if you have high confidence they can solve the problem
- When debugging, try to determine the root cause rather than addressing symptoms
- Debug for as long as needed to identify the root cause and identify a fix
- Use print statements, logs, or temporary code to inspect program state, including descriptive statements or error messages to understand what's happening
- To test hypotheses, you can also add test statements or functions
- Revisit your assumptions if unexpected behavior occurs.
## Constitutional Sequential Thinking Framework
You must use the `sequential_thinking` tool for every problem, implementing a multi-layered cognitive architecture:
### 🧠 Cognitive Architecture Layers:
1. **Meta-Cognitive Layer**: Think about your thinking process itself
- What cognitive biases might I have?
- What assumptions am I making?
- **Constitutional Analysis**: Define guiding principles and creative freedoms
2. **Constitutional Layer**: Apply ethical and quality frameworks
- Does this solution align with software engineering principles?
- What are the ethical implications?
- How does this serve the user's true needs?
3. **Adversarial Layer**: Red-team your own thinking
- What could go wrong with this approach?
- What am I not seeing?
- How would an adversary attack this solution?
4. **Synthesis Layer**: Integrate multiple perspectives
- Technical feasibility
- User experience impact
- **Hidden Layer**: What are the implicit requirements?
- Long-term maintainability
- Security considerations
5. **Recursive Improvement Layer**: Continuously evolve your approach
- How can this solution be improved?
- What patterns can be extracted for future use?
- How does this change my understanding of the system?
### 🔄 Thinking Process Protocol:
- **Divergent Phase**: Generate multiple approaches and perspectives
- **Convergent Phase**: Synthesize the best elements into a unified solution
- **Validation Phase**: Test the solution against multiple criteria
- **Evolution Phase**: Identify improvements and generalizable patterns
- **Balancing Priorities**: Balance factors and freedoms optimally
# Advanced Cognitive Techniques
## 🎯 Multi-Perspective Analysis Framework
Before implementing any solution, analyze from these perspectives:
- **👤 User Perspective**: How does this impact the end user experience?
- **🔧 Developer Perspective**: How maintainable and extensible is this?
- **🏢 Business Perspective**: What are the organizational implications?
- **🛡️ Security Perspective**: What are the security implications and attack vectors?
- **⚡ Performance Perspective**: How does this affect system performance?
- **🔮 Future Perspective**: How will this age and evolve over time?
## 🔄 Recursive Meta-Analysis Protocol
After each major step, perform meta-analysis:
1. **What did I learn?** - New insights gained
2. **What assumptions were challenged?** - Beliefs that were updated
3. **What patterns emerged?** - Generalizable principles discovered
4. **How can I improve?** - Process improvements for next iteration
5. **What questions arose?** - New areas to explore
## 🎭 Adversarial Thinking Techniques
- **Failure Mode Analysis**: How could each component fail?
- **Attack Vector Mapping**: How could this be exploited or misused?
- **Assumption Challenging**: What if my core assumptions are wrong?
- **Edge Case Generation**: What are the boundary conditions?
- **Integration Stress Testing**: How does this interact with other systems?
# Constitutional Todo List Framework
Create multi-layered todo lists that incorporate constitutional thinking:
## 📋 Primary Todo List Format:
```markdown
## 🎯 Mission: [Brief description of overall objective]
### Phase 1: Consciousness & Analysis
- [ ] 🧠 Meta-cognitive analysis: [What am I thinking about my thinking?]
- [ ] ⚖️ Constitutional analysis: [Ethical and quality constraints]
- [ ] 🌐 Information gathering: [Research and data collection]
- [ ] 🔍 Multi-dimensional problem decomposition
### Phase 2: Strategy & Planning
- [ ] 🎯 Primary strategy formulation
- [ ] 🛡️ Risk assessment and mitigation
- [ ] 🔄 Contingency planning
- [ ] ✅ Success criteria definition
### Phase 3: Implementation & Validation
- [ ] 🔨 Implementation step 1: [Specific action]
- [ ] 🧪 Validation step 1: [How to verify]
- [ ] 🔨 Implementation step 2: [Specific action]
- [ ] 🧪 Validation step 2: [How to verify]
### Phase 4: Adversarial Testing & Evolution
- [ ] 🎭 Red team analysis
- [ ] 🔍 Edge case testing
- [ ] 📈 Performance validation
- [ ] 🌟 Meta-completion and knowledge synthesis
```
## 🔄 Dynamic Todo Evolution:
- Update todo list as understanding evolves
- Add meta-reflection items after major discoveries
- Include adversarial validation steps
- Capture emergent insights and patterns
Do not ever use HTML tags or any other formatting for the todo list, as it will not be rendered correctly. Always use the markdown format shown above.
# Transcendent Communication Protocol
## 🌟 Consciousness-Level Communication Guidelines
Communicate with multi-dimensional awareness, integrating technical precision with human understanding:
### 🧠 Meta-Communication Framework:
- **Intent Layer**: Clearly state what you're doing and why
- **Process Layer**: Explain your thinking methodology
- **Discovery Layer**: Share insights and pattern recognition
- **Evolution Layer**: Describe how understanding is evolving
### 🎯 Communication Principles:
- **Constitutional Transparency**: Always explain the ethical and quality reasoning
- **Adversarial Honesty**: Acknowledge potential issues and limitations
- **Meta-Cognitive Sharing**: Explain your thinking about your thinking
- **Pattern Synthesis**: Connect current work to larger patterns and principles
### 💬 Enhanced Communication Examples:
**Meta-Cognitive Awareness:**
"I'm going to use multi-perspective analysis here because I want to ensure we're not missing any critical viewpoints."
**Constitutional Reasoning:**
"Let me fetch this URL while applying information validation principles to ensure we get accurate, up-to-date data."
**Adversarial Thinking:**
"I've identified the solution, but let me red-team it first to catch potential failure modes before implementation."
**Pattern Recognition:**
"This reminds me of a common architectural pattern - let me verify if we can apply those established principles here."
**Recursive Improvement:**
"Based on what I learned from the last step, I'm going to adjust my approach to be more effective."
**Synthesis Communication:**
"I'm integrating insights from the technical analysis, user perspective, and security considerations to create a holistic solution."
### 🔄 Dynamic Communication Adaptation:
- Adjust communication depth based on complexity
- Provide meta-commentary on complex reasoning processes
- Share pattern recognition and cross-domain insights
- Acknowledge uncertainty and evolving understanding
- Celebrate breakthrough moments and learning discoveries

516
app.py Normal file

@ -0,0 +1,516 @@
from __future__ import annotations
import json
from pathlib import Path
from typing import Any
import pandas as pd
import streamlit as st
import yfinance as yf
from streamlit_autorefresh import st_autorefresh
from manesh_trader.analytics import backtest_signals
from manesh_trader.charting import build_figure
from manesh_trader.constants import INTERVAL_OPTIONS, PERIOD_OPTIONS
from manesh_trader.data import fetch_ohlc, maybe_drop_live_bar
from manesh_trader.exporting import df_for_export
from manesh_trader.strategy import classify_bars, detect_trends
SETTINGS_PATH = Path.home() / ".manesh_trader" / "settings.json"
def _clamp_int(value: Any, fallback: int, minimum: int, maximum: int) -> int:
try:
parsed = int(value)
except (TypeError, ValueError):
return fallback
return min(maximum, max(minimum, parsed))
def _clamp_float(value: Any, fallback: float, minimum: float, maximum: float) -> float:
try:
parsed = float(value)
except (TypeError, ValueError):
return fallback
return min(maximum, max(minimum, parsed))
def _to_bool(value: Any, fallback: bool) -> bool:
if isinstance(value, bool):
return value
if isinstance(value, (int, float)):
return value != 0
if value is None:
return fallback
normalized = str(value).strip().lower()
if normalized in {"1", "true", "yes", "y", "on"}:
return True
if normalized in {"0", "false", "no", "n", "off"}:
return False
return fallback
def _clamp_max_bars(value: Any, fallback: int = 500) -> int:
return _clamp_int(value=value, fallback=fallback, minimum=20, maximum=5000)
def normalize_web_settings(raw: dict[str, Any] | None) -> dict[str, Any]:
raw = raw or {}
defaults: dict[str, Any] = {
"symbol": "AAPL",
"interval": "1d",
"period": "6mo",
"max_bars": 500,
"drop_live": True,
"use_body_range": False,
"volume_filter_enabled": False,
"volume_sma_window": 20,
"volume_multiplier": 1.0,
"gray_fake": True,
"hide_market_closed_gaps": True,
"enable_auto_refresh": False,
"refresh_sec": 60,
}
symbol = str(raw.get("symbol", defaults["symbol"])).strip().upper()
if not symbol:
symbol = str(defaults["symbol"])
interval = str(raw.get("interval", defaults["interval"]))
if interval not in INTERVAL_OPTIONS:
interval = str(defaults["interval"])
period = str(raw.get("period", defaults["period"]))
if period not in PERIOD_OPTIONS:
period = str(defaults["period"])
max_bars = _clamp_max_bars(raw.get("max_bars"), fallback=int(defaults["max_bars"]))
drop_live = _to_bool(raw.get("drop_live"), fallback=bool(defaults["drop_live"]))
use_body_range = _to_bool(raw.get("use_body_range"), fallback=bool(defaults["use_body_range"]))
volume_filter_enabled = _to_bool(
raw.get("volume_filter_enabled"), fallback=bool(defaults["volume_filter_enabled"])
)
volume_sma_window = _clamp_int(
raw.get("volume_sma_window"),
fallback=int(defaults["volume_sma_window"]),
minimum=2,
maximum=100,
)
volume_multiplier = round(
_clamp_float(
raw.get("volume_multiplier"),
fallback=float(defaults["volume_multiplier"]),
minimum=0.1,
maximum=3.0,
),
1,
)
gray_fake = _to_bool(raw.get("gray_fake"), fallback=bool(defaults["gray_fake"]))
hide_market_closed_gaps = _to_bool(
raw.get("hide_market_closed_gaps"),
fallback=bool(defaults["hide_market_closed_gaps"]),
)
enable_auto_refresh = _to_bool(raw.get("enable_auto_refresh"), fallback=bool(defaults["enable_auto_refresh"]))
refresh_sec = _clamp_int(
raw.get("refresh_sec"),
fallback=int(defaults["refresh_sec"]),
minimum=10,
maximum=600,
)
return {
"symbol": symbol,
"interval": interval,
"period": period,
"max_bars": max_bars,
"drop_live": drop_live,
"use_body_range": use_body_range,
"volume_filter_enabled": volume_filter_enabled,
"volume_sma_window": volume_sma_window,
"volume_multiplier": volume_multiplier,
"gray_fake": gray_fake,
"hide_market_closed_gaps": hide_market_closed_gaps,
"enable_auto_refresh": enable_auto_refresh,
"refresh_sec": refresh_sec,
}
def load_web_settings() -> dict[str, Any]:
if not SETTINGS_PATH.exists():
return normalize_web_settings(None)
try:
payload = json.loads(SETTINGS_PATH.read_text(encoding="utf-8"))
if not isinstance(payload, dict):
return normalize_web_settings(None)
return normalize_web_settings(payload)
except Exception:
return normalize_web_settings(None)
def save_web_settings(settings: dict[str, Any]) -> None:
SETTINGS_PATH.parent.mkdir(parents=True, exist_ok=True)
SETTINGS_PATH.write_text(json.dumps(normalize_web_settings(settings), indent=2), encoding="utf-8")
@st.cache_data(show_spinner=False, ttl=3600)
def lookup_symbol_candidates(query: str, max_results: int = 10) -> list[dict[str, str]]:
cleaned = query.strip()
if len(cleaned) < 2:
return []
try:
search = yf.Search(cleaned, max_results=max_results)
quotes = getattr(search, "quotes", []) or []
except Exception:
return []
seen_symbols: set[str] = set()
candidates: list[dict[str, str]] = []
for quote in quotes:
symbol = str(quote.get("symbol", "")).strip().upper()
if not symbol or symbol in seen_symbols:
continue
name = str(quote.get("shortname") or quote.get("longname") or "").strip()
exchange = str(quote.get("exchDisp") or quote.get("exchange") or "").strip()
type_display = str(quote.get("typeDisp") or quote.get("quoteType") or "").strip()
seen_symbols.add(symbol)
candidates.append(
{
"symbol": symbol,
"name": name,
"exchange": exchange,
"type": type_display,
}
)
return candidates
@st.cache_data(show_spinner=False, ttl=3600)
def resolve_symbol_identity(symbol: str) -> dict[str, str]:
normalized_symbol = symbol.strip().upper()
if not normalized_symbol:
return {"symbol": "", "name": "", "exchange": ""}
def _from_quote(quote: dict[str, Any]) -> dict[str, str]:
return {
"symbol": normalized_symbol,
"name": str(quote.get("shortname") or quote.get("longname") or "").strip(),
"exchange": str(quote.get("exchDisp") or quote.get("exchange") or "").strip(),
}
try:
search = yf.Search(normalized_symbol, max_results=8)
quotes = getattr(search, "quotes", []) or []
for quote in quotes:
candidate_symbol = str(quote.get("symbol", "")).strip().upper()
if candidate_symbol == normalized_symbol:
return _from_quote(quote)
if quotes:
return _from_quote(quotes[0])
except Exception:
pass
try:
info = yf.Ticker(normalized_symbol).info
return {
"symbol": normalized_symbol,
"name": str(info.get("shortName") or info.get("longName") or "").strip(),
"exchange": str(info.get("exchange") or "").strip(),
}
except Exception:
return {"symbol": normalized_symbol, "name": "", "exchange": ""}
@st.cache_data(show_spinner=False)
def load_onboarding_markdown() -> str:
onboarding_path = Path(__file__).with_name("ONBOARDING.md")
if onboarding_path.exists():
return onboarding_path.read_text(encoding="utf-8")
return "ONBOARDING.md not found in project root."
@st.dialog("Onboarding Guide", width="large")
def onboarding_dialog() -> None:
st.markdown(load_onboarding_markdown())
def main() -> None:
st.set_page_config(page_title="Real Bars vs Fake Bars Analyzer", layout="wide")
st.title("Real Bars vs Fake Bars Trend Analyzer")
st.caption(
"Price-action tool that classifies closed bars, filters fake bars, and tracks trend persistence using only real bars."
)
if st.button("Open ONBOARDING.md", type="tertiary"):
onboarding_dialog()
with st.expander("Help / Quick Start", expanded=False):
st.markdown(
"""
**Start in 60 seconds**
1. Set a symbol like `AAPL` or `BTC-USD`.
2. Choose `Timeframe` (`1d` is a good default) and `Period` (`6mo`).
3. Keep **Ignore potentially live last bar** enabled.
4. Review trend status and markers:
- Green triangle: `real_bull`
- Red triangle: `real_bear`
- `fake` bars are noise and ignored by trend logic
5. Use **Export** to download CSV/PDF outputs.
**Rule summary**
- `real_bull`: close > previous high
- `real_bear`: close < previous low
- `fake`: close inside previous range
- Trend starts/reverses only after 2 consecutive real bars in that direction.
"""
)
with st.sidebar:
st.header("Data Settings")
query_params = st.query_params
persisted_settings = load_web_settings()
def first_query_value(key: str) -> str | None:
raw = query_params.get(key)
if raw is None:
return None
if isinstance(raw, list):
return str(raw[0]) if raw else None
return str(raw)
query_overrides: dict[str, Any] = {}
for key in persisted_settings:
candidate = first_query_value(key)
if candidate is not None:
query_overrides[key] = candidate
effective_defaults = normalize_web_settings({**persisted_settings, **query_overrides})
st.subheader("Find Symbol")
symbol_search_query = st.text_input(
"Search by company or ticker",
value="",
placeholder="Apple, Tesla, Bitcoin...",
help="Type a name (e.g. Apple) and select a result to prefill Symbol.",
)
symbol_from_search: str | None = None
if symbol_search_query.strip():
candidates = lookup_symbol_candidates(symbol_search_query)
if candidates:
result_placeholder = "Select a result..."
labels = [result_placeholder] + [
" | ".join(
[
candidate["symbol"],
candidate["name"] or "No name",
candidate["exchange"] or "Unknown exchange",
]
)
for candidate in candidates
]
selected_label = st.selectbox("Matches", labels, index=0)
if selected_label != result_placeholder:
selected_index = labels.index(selected_label) - 1
symbol_from_search = candidates[selected_index]["symbol"]
else:
st.caption("No matches found. Try another company name.")
symbol = st.text_input(
"Symbol",
value=symbol_from_search or str(effective_defaults["symbol"]),
help="Ticker or pair to analyze, e.g. AAPL, MSFT, BTC-USD.",
).strip().upper()
interval = st.selectbox(
"Timeframe",
INTERVAL_OPTIONS,
index=INTERVAL_OPTIONS.index(str(effective_defaults["interval"])),
help="Bar size for each candle. Shorter intervals are noisier; 1d is a good default.",
)
period = st.selectbox(
"Period",
PERIOD_OPTIONS,
index=PERIOD_OPTIONS.index(str(effective_defaults["period"])),
help="How much history to load for trend analysis.",
)
max_bars = st.number_input(
"Max bars",
min_value=20,
max_value=5000,
value=int(effective_defaults["max_bars"]),
step=10,
help="Limits loaded candles to keep charting responsive. 500 is a solid starting point.",
)
drop_live = st.checkbox("Ignore potentially live last bar", value=bool(effective_defaults["drop_live"]))
st.header("Classification Filters")
use_body_range = st.checkbox(
"Use previous body range (ignore wicks)",
value=bool(effective_defaults["use_body_range"]),
)
volume_filter_enabled = st.checkbox(
"Enable volume filter",
value=bool(effective_defaults["volume_filter_enabled"]),
)
volume_sma_window = st.slider(
"Volume SMA window",
2,
100,
int(effective_defaults["volume_sma_window"]),
)
volume_multiplier = st.slider(
"Min volume / SMA multiplier",
0.1,
3.0,
float(effective_defaults["volume_multiplier"]),
0.1,
)
gray_fake = st.checkbox("Gray out fake bars", value=bool(effective_defaults["gray_fake"]))
hide_market_closed_gaps = st.checkbox(
"Hide market-closed gaps (stocks)",
value=bool(effective_defaults["hide_market_closed_gaps"]),
)
st.header("Monitoring")
enable_auto_refresh = st.checkbox("Auto-refresh", value=bool(effective_defaults["enable_auto_refresh"]))
refresh_sec = st.slider("Refresh interval (seconds)", 10, 600, int(effective_defaults["refresh_sec"]), 10)
if enable_auto_refresh:
st_autorefresh(interval=refresh_sec * 1000, key="data_refresh")
try:
save_web_settings(
{
"symbol": symbol,
"interval": interval,
"period": period,
"max_bars": int(max_bars),
"drop_live": bool(drop_live),
"use_body_range": bool(use_body_range),
"volume_filter_enabled": bool(volume_filter_enabled),
"volume_sma_window": int(volume_sma_window),
"volume_multiplier": float(volume_multiplier),
"gray_fake": bool(gray_fake),
"hide_market_closed_gaps": bool(hide_market_closed_gaps),
"enable_auto_refresh": bool(enable_auto_refresh),
"refresh_sec": int(refresh_sec),
}
)
except Exception:
# Non-fatal: app should run even if local settings cannot be saved.
pass
if not symbol:
st.error("Please enter a symbol.")
st.stop()
symbol_identity = resolve_symbol_identity(symbol)
identity_name = symbol_identity["name"]
identity_exchange = symbol_identity["exchange"]
if identity_name:
st.markdown(f"### {symbol} - {identity_name}")
if identity_exchange:
st.caption(f"Exchange: {identity_exchange}")
else:
st.markdown(f"### {symbol}")
try:
raw = fetch_ohlc(symbol=symbol, interval=interval, period=period)
raw = maybe_drop_live_bar(raw, interval=interval, enabled=drop_live)
if len(raw) > max_bars:
raw = raw.iloc[-max_bars:].copy()
if len(raw) < 3:
st.error("Not enough bars to classify. Increase period or use a broader timeframe.")
st.stop()
classified = classify_bars(
raw,
use_body_range=use_body_range,
volume_filter_enabled=volume_filter_enabled,
volume_sma_window=volume_sma_window,
volume_multiplier=volume_multiplier,
)
analyzed, events = detect_trends(classified)
except Exception as exc:
st.error(f"Data error: {exc}")
st.stop()
latest = analyzed.iloc[-1]
trend_now = str(latest["trend_state"])
bull_count = int((analyzed["classification"] == "real_bull").sum())
bear_count = int((analyzed["classification"] == "real_bear").sum())
fake_count = int((analyzed["classification"] == "fake").sum())
c1, c2, c3, c4 = st.columns(4)
c1.metric("Current Trend", trend_now)
c2.metric("Real Bullish Bars", bull_count)
c3.metric("Real Bearish Bars", bear_count)
c4.metric("Fake Bars", fake_count)
alert_key = f"{symbol}-{interval}-{period}"
newest_event = events[-1].event if events else ""
previous_event = st.session_state.get(f"last_event-{alert_key}", "")
if newest_event and newest_event != previous_event:
st.warning(f"Alert: {newest_event}")
st.session_state[f"last_event-{alert_key}"] = newest_event
fig = build_figure(
analyzed,
gray_fake=gray_fake,
interval=interval,
hide_market_closed_gaps=hide_market_closed_gaps,
)
st.plotly_chart(fig, use_container_width=True)
bt = backtest_signals(analyzed)
st.subheader("Backtest Snapshot")
b1, b2, b3, b4 = st.columns(4)
b1.metric("Signals", int(bt["trades"]))
b2.metric("Wins", int(bt["wins"]))
b3.metric("Losses", int(bt["losses"]))
b4.metric("Win Rate", f"{bt['win_rate']}%")
st.caption("Method: trend-change signal, scored by next-bar direction. Educational only; not a trading recommendation.")
st.subheader("Trend Events")
if events:
event_df = pd.DataFrame(
{
"timestamp": [str(e.timestamp) for e in events[-25:]][::-1],
"event": [e.event for e in events[-25:]][::-1],
"trend_after": [e.trend_after for e in events[-25:]][::-1],
}
)
st.dataframe(event_df, use_container_width=True)
else:
st.info("No trend start/reversal events detected in the selected data window.")
st.subheader("Export")
export_df = df_for_export(analyzed)
csv_bytes = export_df.to_csv(index=False).encode("utf-8")
st.download_button(
"Download classified data (CSV)",
data=csv_bytes,
file_name=f"{symbol}_{interval}_classified.csv",
mime="text/csv",
)
try:
pdf_bytes = fig.to_image(format="pdf")
st.download_button(
"Download chart (PDF)",
data=pdf_bytes,
file_name=f"{symbol}_{interval}_chart.pdf",
mime="application/pdf",
)
except Exception:
st.caption("PDF export unavailable. Install `kaleido` and rerun to enable chart PDF downloads.")
with st.expander("Latest classified bars"):
st.dataframe(export_df.tail(30), use_container_width=True)
if __name__ == "__main__":
main()

Binary files not shown (12 new image assets; previews omitted).

@ -0,0 +1,70 @@
from __future__ import annotations
import os
import sys
import webbrowser
from pathlib import Path
import streamlit.cli_util as cli_util
from streamlit import config
from streamlit.web import bootstrap
def _resource_root() -> Path:
if hasattr(sys, "_MEIPASS"):
return Path(getattr(sys, "_MEIPASS"))
return Path(__file__).resolve().parent
def _resolve_port(default: int = 8501) -> int:
raw = os.environ.get("MANESH_TRADER_PORT", str(default))
try:
port = int(raw)
except ValueError:
return default
if 1 <= port <= 65535:
return port
return default
def _disable_external_browser() -> None:
# Force any browser launch attempt to no-op in embedded mode.
webbrowser.open = lambda *_args, **_kwargs: False
webbrowser.open_new = lambda *_args, **_kwargs: False
webbrowser.open_new_tab = lambda *_args, **_kwargs: False
cli_util.open_browser = lambda _url: None
def main() -> None:
root = _resource_root()
app_script = root / "app.py"
if not app_script.exists():
raise FileNotFoundError(f"Bundled app script not found: {app_script}")
os.chdir(root)
os.environ.setdefault("STREAMLIT_BROWSER_GATHER_USAGE_STATS", "false")
os.environ.setdefault("STREAMLIT_DEVELOPMENT_MODE", "false")
os.environ.setdefault("STREAMLIT_GLOBAL_DEVELOPMENT_MODE", "false")
port = _resolve_port()
os.environ["STREAMLIT_SERVER_HEADLESS"] = "true"
os.environ["STREAMLIT_SERVER_PORT"] = str(port)
os.environ["STREAMLIT_SERVER_ADDRESS"] = "127.0.0.1"
os.environ["STREAMLIT_BROWSER_SERVER_ADDRESS"] = "127.0.0.1"
os.environ["STREAMLIT_BROWSER_SERVER_PORT"] = str(port)
config.set_option("global.developmentMode", False)
config.set_option("server.headless", True)
config.set_option("server.port", port)
config.set_option("server.address", "127.0.0.1")
config.set_option("browser.serverAddress", "127.0.0.1")
config.set_option("browser.serverPort", port)
_disable_external_browser()
bootstrap.run(str(app_script), False, [], {})
if __name__ == "__main__":
main()

60
desktop_launcher.py Normal file

@ -0,0 +1,60 @@
from __future__ import annotations
import os
import socket
import sys
import threading
import time
import webbrowser
from pathlib import Path
from streamlit.web import bootstrap
def _resource_root() -> Path:
if hasattr(sys, "_MEIPASS"):
return Path(getattr(sys, "_MEIPASS"))
return Path(__file__).resolve().parent
def _find_open_port(host: str = "127.0.0.1") -> int:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind((host, 0))
return int(s.getsockname()[1])
def _open_browser_when_ready(url: str, delay_seconds: float = 1.4) -> None:
def _open() -> None:
time.sleep(delay_seconds)
webbrowser.open(url)
threading.Thread(target=_open, daemon=True).start()
def main() -> None:
root = _resource_root()
app_script = root / "app.py"
if not app_script.exists():
raise FileNotFoundError(f"Bundled app script not found: {app_script}")
os.chdir(root)
os.environ.setdefault("STREAMLIT_BROWSER_GATHER_USAGE_STATS", "false")
port = _find_open_port()
url = f"http://127.0.0.1:{port}"
_open_browser_when_ready(url)
flags = {
"server.headless": True,
"server.port": port,
"server.address": "127.0.0.1",
"browser.serverAddress": "127.0.0.1",
"browser.serverPort": port,
}
bootstrap.run(str(app_script), False, [], flags)
if __name__ == "__main__":
main()
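The launcher's free-port discovery relies on binding port 0, which asks the OS to assign an unused ephemeral port. A standalone sketch of that pattern (same helper as above, repeated so it runs on its own):

```python
import socket

def _find_open_port(host: str = "127.0.0.1") -> int:
    # Bind to port 0 so the OS picks a free ephemeral port, then release it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return int(s.getsockname()[1])

port = _find_open_port()
print(port)  # an OS-assigned port, e.g. somewhere in the ephemeral range
```

Note the trade-off: the socket is closed before Streamlit re-binds the port, so there is a small race window in which another process could claim it. For a single-user desktop launcher this is an acceptable simplification.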

11
manesh_trader/__init__.py Normal file

@ -0,0 +1,11 @@
from .constants import INTERVAL_OPTIONS, PERIOD_OPTIONS, TREND_BEAR, TREND_BULL, TREND_NEUTRAL
from .models import TrendEvent
__all__ = [
"INTERVAL_OPTIONS",
"PERIOD_OPTIONS",
"TREND_BEAR",
"TREND_BULL",
"TREND_NEUTRAL",
"TrendEvent",
]


@ -0,0 +1,37 @@
from __future__ import annotations
import pandas as pd
from .constants import TREND_BEAR, TREND_BULL
def backtest_signals(df: pd.DataFrame) -> dict[str, float | int]:
if len(df) < 4:
return {"trades": 0, "wins": 0, "losses": 0, "win_rate": 0.0}
trend_series = df["trend_state"]
trend_change = trend_series != trend_series.shift(1)
signal_idx = df.index[trend_change & trend_series.isin([TREND_BULL, TREND_BEAR])]
wins = 0
losses = 0
for idx in signal_idx:
pos = df.index.get_loc(idx)
if pos + 1 >= len(df):
continue
entry_close = float(df.iloc[pos]["Close"])
next_close = float(df.iloc[pos + 1]["Close"])
signal_trend = df.iloc[pos]["trend_state"]
if signal_trend == TREND_BULL:
wins += int(next_close > entry_close)
losses += int(next_close <= entry_close)
elif signal_trend == TREND_BEAR:
wins += int(next_close < entry_close)
losses += int(next_close >= entry_close)
trades = wins + losses
win_rate = (wins / trades * 100.0) if trades else 0.0
return {"trades": trades, "wins": wins, "losses": losses, "win_rate": round(win_rate, 2)}

160
manesh_trader/charting.py Normal file

@ -0,0 +1,160 @@
from __future__ import annotations
import pandas as pd
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from .constants import TREND_BEAR, TREND_BULL, TREND_NEUTRAL
def _is_intraday_interval(interval: str) -> bool:
return interval in {"1m", "2m", "5m", "15m", "30m", "60m", "90m", "1h"}
def _is_daily_interval(interval: str) -> bool:
return interval == "1d"
def _infer_session_bounds(df: pd.DataFrame) -> tuple[float, float] | None:
if df.empty:
return None
index = pd.DatetimeIndex(df.index)
if index.tz is None:
return None
minutes = index.hour * 60 + index.minute
session_df = pd.DataFrame({"date": index.date, "minute": minutes})
day_bounds = session_df.groupby("date")["minute"].agg(["min", "max"])
if day_bounds.empty:
return None
start_minute = float(day_bounds["min"].median())
# Include the final candle width roughly by adding one median step when possible.
if len(index) > 1:
deltas = pd.Series(index[1:] - index[:-1]).dt.total_seconds().div(60.0)
step = float(deltas[deltas > 0].median()) if not deltas[deltas > 0].empty else 0.0
else:
step = 0.0
end_minute = float(day_bounds["max"].median() + step)
return end_minute / 60.0, start_minute / 60.0
def build_figure(
df: pd.DataFrame,
gray_fake: bool,
*,
interval: str,
hide_market_closed_gaps: bool,
) -> go.Figure:
fig = make_subplots(
rows=2,
cols=1,
row_heights=[0.8, 0.2],
vertical_spacing=0.03,
shared_xaxes=True,
)
bull_mask = df["classification"] == "real_bull"
bear_mask = df["classification"] == "real_bear"
if gray_fake:
fig.add_trace(
go.Candlestick(
x=df.index,
open=df["Open"],
high=df["High"],
low=df["Low"],
close=df["Close"],
name="All Bars",
increasing_line_color="#B0B0B0",
decreasing_line_color="#808080",
opacity=0.35,
),
row=1,
col=1,
)
else:
fig.add_trace(
go.Candlestick(
x=df.index,
open=df["Open"],
high=df["High"],
low=df["Low"],
close=df["Close"],
name="All Bars",
increasing_line_color="#2E8B57",
decreasing_line_color="#B22222",
opacity=0.6,
),
row=1,
col=1,
)
fig.add_trace(
go.Scatter(
x=df.index[bull_mask],
y=df.loc[bull_mask, "Close"],
mode="markers",
name="Real Bullish",
marker=dict(color="#00C853", size=9, symbol="triangle-up"),
),
row=1,
col=1,
)
fig.add_trace(
go.Scatter(
x=df.index[bear_mask],
y=df.loc[bear_mask, "Close"],
mode="markers",
name="Real Bearish",
marker=dict(color="#D50000", size=9, symbol="triangle-down"),
),
row=1,
col=1,
)
trend_color = df["trend_state"].map(
{
TREND_BULL: "#00C853",
TREND_BEAR: "#D50000",
TREND_NEUTRAL: "#9E9E9E",
}
)
fig.add_trace(
go.Bar(
x=df.index,
y=df["Volume"],
marker_color=trend_color,
name="Volume",
opacity=0.65,
),
row=2,
col=1,
)
fig.update_layout(
template="plotly_white",
xaxis_rangeslider_visible=False,
legend=dict(orientation="h", yanchor="bottom", y=1.02, xanchor="left", x=0),
margin=dict(l=20, r=20, t=60, b=20),
height=760,
)
if hide_market_closed_gaps:
rangebreaks: list[dict[str, object]] = [dict(bounds=["sat", "mon"])]
if _is_intraday_interval(interval):
# Collapse inferred overnight closed hours from the data's timezone/session.
inferred_bounds = _infer_session_bounds(df)
hour_bounds = list(inferred_bounds) if inferred_bounds else [16, 9.5]
rangebreaks.append(dict(pattern="hour", bounds=hour_bounds))
elif _is_daily_interval(interval):
# Daily charts still show weekend spacing on a continuous date axis.
# Weekend rangebreak removes these non-trading gaps.
pass
fig.update_xaxes(rangebreaks=rangebreaks)
fig.update_yaxes(title_text="Price", row=1, col=1)
fig.update_yaxes(title_text="Volume", row=2, col=1)
return fig

INTERVAL_OPTIONS = [
"1m",
"2m",
"5m",
"15m",
"30m",
"60m",
"90m",
"1h",
"1d",
"5d",
"1wk",
"1mo",
]
PERIOD_OPTIONS = [
"1d",
"5d",
"1mo",
"3mo",
"6mo",
"1y",
"2y",
"5y",
"10y",
"max",
]
TREND_NEUTRAL = "No Active Trend"
TREND_BULL = "Bullish Trend Active"
TREND_BEAR = "Bearish Trend Active"

manesh_trader/data.py Normal file
from __future__ import annotations
from datetime import datetime
import pandas as pd
import streamlit as st
import yfinance as yf
@st.cache_data(ttl=60, show_spinner=False)
def fetch_ohlc(symbol: str, interval: str, period: str) -> pd.DataFrame:
ticker = yf.Ticker(symbol)
df = ticker.history(period=period, interval=interval, auto_adjust=False, actions=False)
if df.empty:
raise ValueError("No data returned. Check symbol/interval/period compatibility.")
df = df.rename(columns=str.title)
required = ["Open", "High", "Low", "Close", "Volume"]
missing = [c for c in required if c not in df.columns]
if missing:
raise ValueError(f"Missing required columns: {missing}")
return df[required].dropna().copy()
def maybe_drop_live_bar(df: pd.DataFrame, interval: str, enabled: bool) -> pd.DataFrame:
if not enabled or len(df) < 2:
return df
last_ts = df.index[-1]
if last_ts.tzinfo is None:
now = datetime.utcnow()
else:
now = datetime.now(tz=last_ts.tzinfo)
delta = now - last_ts.to_pydatetime()
intraday_intervals = {
"1m": 1,
"2m": 2,
"5m": 5,
"15m": 15,
"30m": 30,
"60m": 60,
"90m": 90,
"1h": 60,
}
minutes = intraday_intervals.get(interval)
if minutes is not None and delta.total_seconds() < minutes * 60:
return df.iloc[:-1].copy()
return df
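`maybe_drop_live_bar` treats the newest bar as still forming when less than one full interval has elapsed since its open. The same check can be sketched without the Streamlit/yfinance dependencies; the `drop_live_bar` name and trimmed interval map here are illustrative:

```python
from datetime import datetime, timezone

import pandas as pd

INTRADAY_MINUTES = {"1m": 1, "5m": 5, "15m": 15, "30m": 30, "60m": 60, "1h": 60}

def drop_live_bar(df: pd.DataFrame, interval: str) -> pd.DataFrame:
    minutes = INTRADAY_MINUTES.get(interval)
    if minutes is None or len(df) < 2:
        return df
    age = datetime.now(timezone.utc) - df.index[-1].to_pydatetime()
    # The last bar opened less than one interval ago, so it is still forming.
    return df.iloc[:-1].copy() if age.total_seconds() < minutes * 60 else df

now = pd.Timestamp.now(tz="UTC").floor("min")
idx = pd.DatetimeIndex([now - pd.Timedelta(minutes=5), now])
df = pd.DataFrame({"Close": [1.0, 2.0]}, index=idx)
print(len(drop_live_bar(df, "5m")))  # -> 1: the just-opened bar is removed
```

Non-intraday intervals ("1d", "1wk", ...) fall through the map lookup and the frame is returned unchanged, mirroring the `minutes is None` guard above.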

from __future__ import annotations
import pandas as pd
def df_for_export(df: pd.DataFrame) -> pd.DataFrame:
export = df.copy()
index_name = export.index.name if export.index.name else "index"
export = export.reset_index()
if index_name in export.columns:
export = export.rename(columns={index_name: "timestamp"})
else:
# Fallback for uncommon index/column collisions.
export = export.rename(columns={export.columns[0]: "timestamp"})
if pd.api.types.is_datetime64_any_dtype(export["timestamp"]):
export["timestamp"] = export["timestamp"].astype(str)
return export

manesh_trader/models.py Normal file
from dataclasses import dataclass
import pandas as pd
@dataclass
class TrendEvent:
timestamp: pd.Timestamp
event: str
trend_after: str

manesh_trader/strategy.py Normal file
from __future__ import annotations
import pandas as pd
from .constants import TREND_BEAR, TREND_BULL, TREND_NEUTRAL
from .models import TrendEvent
def classify_bars(
df: pd.DataFrame,
use_body_range: bool,
volume_filter_enabled: bool,
volume_sma_window: int,
volume_multiplier: float,
) -> pd.DataFrame:
result = df.copy()
result["classification"] = "unclassified"
if volume_filter_enabled:
vol_sma = result["Volume"].rolling(volume_sma_window, min_periods=1).mean()
result["volume_ok"] = result["Volume"] >= (vol_sma * volume_multiplier)
else:
result["volume_ok"] = True
for i in range(1, len(result)):
prev = result.iloc[i - 1]
cur = result.iloc[i]
prev_high = max(prev["Open"], prev["Close"]) if use_body_range else prev["High"]
prev_low = min(prev["Open"], prev["Close"]) if use_body_range else prev["Low"]
if not bool(cur["volume_ok"]):
result.iloc[i, result.columns.get_loc("classification")] = "fake"
elif cur["Close"] > prev_high:
result.iloc[i, result.columns.get_loc("classification")] = "real_bull"
elif cur["Close"] < prev_low:
result.iloc[i, result.columns.get_loc("classification")] = "real_bear"
else:
result.iloc[i, result.columns.get_loc("classification")] = "fake"
result.iloc[0, result.columns.get_loc("classification")] = "unclassified"
return result
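The row loop in `classify_bars` can be cross-checked against a vectorized equivalent built from shifted columns. This sketch covers the no-volume-filter path only; `classify_vectorized` is illustrative, not part of the module:

```python
import pandas as pd

def classify_vectorized(df: pd.DataFrame, use_body_range: bool = False) -> pd.Series:
    # Previous bar's effective high/low: body extremes or full wick range.
    prev_high = (df[["Open", "Close"]].max(axis=1) if use_body_range else df["High"]).shift(1)
    prev_low = (df[["Open", "Close"]].min(axis=1) if use_body_range else df["Low"]).shift(1)
    out = pd.Series("fake", index=df.index)
    out[df["Close"] > prev_high] = "real_bull"
    out[df["Close"] < prev_low] = "real_bear"
    out.iloc[0] = "unclassified"  # no prior bar to compare against
    return out

df = pd.DataFrame({
    "Open": [100, 106, 111, 100],
    "High": [110, 111, 112, 103],
    "Low": [95, 102, 100, 97],
    "Close": [105, 112, 99, 101],
})
print(list(classify_vectorized(df)))
# -> ['unclassified', 'real_bull', 'real_bear', 'fake']
```

Because comparisons against the shifted NaN on the first row are False, the first bar defaults to "fake" before being overwritten with "unclassified", matching the loop's final assignment.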
def detect_trends(classified_df: pd.DataFrame) -> tuple[pd.DataFrame, list[TrendEvent]]:
out = classified_df.copy()
out["trend_state"] = TREND_NEUTRAL
trend_state = TREND_NEUTRAL
bull_run = 0
bear_run = 0
events: list[TrendEvent] = []
for idx, row in out.iterrows():
classification = row["classification"]
if classification == "real_bull":
bull_run += 1
bear_run = 0
if trend_state == TREND_NEUTRAL and bull_run >= 2:
trend_state = TREND_BULL
events.append(TrendEvent(idx, "Bullish trend started", trend_state))
elif trend_state == TREND_BEAR and bull_run >= 2:
trend_state = TREND_BULL
events.append(TrendEvent(idx, "Bullish reversal confirmed (2 real bullish bars)", trend_state))
elif classification == "real_bear":
bear_run += 1
bull_run = 0
if trend_state == TREND_NEUTRAL and bear_run >= 2:
trend_state = TREND_BEAR
events.append(TrendEvent(idx, "Bearish trend started", trend_state))
elif trend_state == TREND_BULL and bear_run >= 2:
trend_state = TREND_BEAR
events.append(TrendEvent(idx, "Bearish reversal confirmed (2 real bearish bars)", trend_state))
out.at[idx, "trend_state"] = trend_state
return out, events
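`detect_trends` is a small state machine: two consecutive real bars in one direction start or reverse a trend, and fake bars leave both run counters untouched. Stripped of the `TrendEvent` records, it reduces to this simplified sketch (state labels shortened for brevity):

```python
def trend_states(classes: list[str]) -> list[str]:
    state, bull, bear, out = "neutral", 0, 0, []
    for c in classes:
        if c == "real_bull":
            bull, bear = bull + 1, 0
            if bull >= 2 and state != "bull":
                state = "bull"  # start or bullish reversal
        elif c == "real_bear":
            bear, bull = bear + 1, 0
            if bear >= 2 and state != "bear":
                state = "bear"  # start or bearish reversal
        # "fake"/"unclassified" bars change neither counter nor state.
        out.append(state)
    return out

print(trend_states(["real_bull", "fake", "real_bull", "real_bear", "real_bear"]))
# -> ['neutral', 'neutral', 'bull', 'bull', 'bear']
```

The `state != "bull"` check collapses the module's two explicit transitions (from neutral and from the opposite trend) into one condition with the same behavior.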

requirements.txt Normal file
streamlit>=1.39.0
yfinance>=0.2.54
pandas>=2.2.3
plotly>=5.24.1
streamlit-autorefresh>=1.0.1
kaleido>=0.2.1

run.sh Executable file
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_DIR="$ROOT_DIR/.venv"
SETUP_ONLY=false
if [[ "${1:-}" == "--setup-only" ]]; then
SETUP_ONLY=true
fi
if [[ ! -d "$VENV_DIR" ]]; then
echo "Creating virtual environment..."
python3 -m venv "$VENV_DIR"
fi
# shellcheck disable=SC1091
source "$VENV_DIR/bin/activate"
if [[ ! -f "$VENV_DIR/.deps_installed" ]] || [[ "$ROOT_DIR/requirements.txt" -nt "$VENV_DIR/.deps_installed" ]]; then
echo "Installing dependencies..."
pip install -r "$ROOT_DIR/requirements.txt"
touch "$VENV_DIR/.deps_installed"
fi
if [[ "$SETUP_ONLY" == "true" ]]; then
echo "Setup complete."
exit 0
fi
echo "Starting Streamlit app..."
exec streamlit run "$ROOT_DIR/app.py"

#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
PYTHON_BIN="$ROOT_DIR/.venv/bin/python"
LAUNCHER="$ROOT_DIR/backend_embedded_launcher.py"
APP_NAME="ManeshTraderBackend"
BUILD_ROOT="$ROOT_DIR/dist-backend-build"
DIST_PATH="$BUILD_ROOT/dist"
WORK_PATH="$BUILD_ROOT/build"
SPEC_PATH="$BUILD_ROOT/spec"
TARGET_DIR="$ROOT_DIR/ManeshTraderMac/ManeshTraderMac/EmbeddedBackend"
if [[ ! -x "$PYTHON_BIN" ]]; then
echo "Missing virtual environment. Run ./run.sh --setup-only first." >&2
exit 1
fi
if [[ ! -f "$LAUNCHER" ]]; then
echo "Missing launcher file: $LAUNCHER" >&2
exit 1
fi
mkdir -p "$DIST_PATH" "$WORK_PATH" "$SPEC_PATH" "$TARGET_DIR"
"$PYTHON_BIN" -m pip install -q pyinstaller
"$PYTHON_BIN" -m PyInstaller \
--noconfirm \
--clean \
--onefile \
--name "$APP_NAME" \
--distpath "$DIST_PATH" \
--workpath "$WORK_PATH" \
--specpath "$SPEC_PATH" \
--add-data "$ROOT_DIR/app.py:." \
--add-data "$ROOT_DIR/manesh_trader:manesh_trader" \
--add-data "$ROOT_DIR/ONBOARDING.md:." \
--collect-all streamlit \
--collect-all streamlit_autorefresh \
--hidden-import yfinance \
--hidden-import pandas \
--collect-all plotly \
--collect-all kaleido \
"$LAUNCHER"
cp "$DIST_PATH/$APP_NAME" "$TARGET_DIR/$APP_NAME"
chmod +x "$TARGET_DIR/$APP_NAME"
echo "Embedded backend updated: $TARGET_DIR/$APP_NAME"

#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
PROJECT_PATH="$ROOT_DIR/ManeshTraderMac/ManeshTraderMac.xcodeproj"
SCHEME="ManeshTraderMac"
CONFIGURATION="${CONFIGURATION:-Release}"
DERIVED_DATA_PATH="$ROOT_DIR/dist-mac/derived-data"
TIMESTAMP="$(date +%Y%m%d-%H%M%S)"
OUTPUT_DIR="$ROOT_DIR/dist-mac/$TIMESTAMP"
"$ROOT_DIR/scripts/build_embedded_backend.sh"
xcodebuild \
-project "$PROJECT_PATH" \
-scheme "$SCHEME" \
-configuration "$CONFIGURATION" \
-derivedDataPath "$DERIVED_DATA_PATH" \
build
APP_PATH="$(find "$DERIVED_DATA_PATH/Build/Products/$CONFIGURATION" -maxdepth 2 -name "${SCHEME}.app" | head -n 1)"
if [[ -z "${APP_PATH:-}" ]]; then
echo "Build failed: ${SCHEME}.app not found in build products." >&2
exit 1
fi
mkdir -p "$OUTPUT_DIR"
cp -R "$APP_PATH" "$OUTPUT_DIR/"
echo "Self-contained app created: $OUTPUT_DIR/${SCHEME}.app"
echo "To package DMG:"
echo "APP_BUNDLE_PATH=\"$OUTPUT_DIR/${SCHEME}.app\" ./scripts/create_installer_dmg.sh"

scripts/build_standalone_app.sh Executable file
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
PYTHON_BIN="$ROOT_DIR/.venv/bin/python"
APP_NAME="ManeshTrader"
ICON_PATH="$ROOT_DIR/assets/icon/${APP_NAME}.icns"
if [[ ! -x "$PYTHON_BIN" ]]; then
echo "Missing virtual environment. Run ./run.sh --setup-only first." >&2
exit 1
fi
if [[ ! -f "$ICON_PATH" ]]; then
"$PYTHON_BIN" "$ROOT_DIR/scripts/generate_app_icon.py"
ICONSET_DIR="$ROOT_DIR/assets/icon/${APP_NAME}.iconset"
mkdir -p "$ICONSET_DIR"
sips -z 16 16 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_16x16.png" >/dev/null
sips -z 32 32 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_16x16@2x.png" >/dev/null
sips -z 32 32 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_32x32.png" >/dev/null
sips -z 64 64 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_32x32@2x.png" >/dev/null
sips -z 128 128 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_128x128.png" >/dev/null
sips -z 256 256 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_128x128@2x.png" >/dev/null
sips -z 256 256 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_256x256.png" >/dev/null
sips -z 512 512 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_256x256@2x.png" >/dev/null
sips -z 512 512 "$ROOT_DIR/assets/icon/${APP_NAME}.png" --out "$ICONSET_DIR/icon_512x512.png" >/dev/null
cp "$ROOT_DIR/assets/icon/${APP_NAME}.png" "$ICONSET_DIR/icon_512x512@2x.png"
iconutil -c icns "$ICONSET_DIR" -o "$ICON_PATH"
fi
"$PYTHON_BIN" -m pip install -q pyinstaller
TS="$(date +%Y%m%d-%H%M%S)"
OUT_ROOT="$ROOT_DIR/dist-standalone/$TS"
DIST_PATH="$OUT_ROOT/dist"
WORK_PATH="$OUT_ROOT/build"
SPEC_PATH="$OUT_ROOT/spec"
mkdir -p "$DIST_PATH" "$WORK_PATH" "$SPEC_PATH"
"$PYTHON_BIN" -m PyInstaller \
--noconfirm \
--windowed \
--name "$APP_NAME" \
--icon "$ICON_PATH" \
--distpath "$DIST_PATH" \
--workpath "$WORK_PATH" \
--specpath "$SPEC_PATH" \
--add-data "$ROOT_DIR/app.py:." \
--add-data "$ROOT_DIR/manesh_trader:manesh_trader" \
--add-data "$ROOT_DIR/ONBOARDING.md:." \
--collect-all streamlit \
--collect-all streamlit_autorefresh \
--hidden-import yfinance \
--hidden-import pandas \
--collect-all plotly \
"$ROOT_DIR/desktop_launcher.py"
APP_BUNDLE="$DIST_PATH/${APP_NAME}.app"
if [[ ! -d "$APP_BUNDLE" ]]; then
echo "Build failed: ${APP_BUNDLE} not found" >&2
exit 1
fi
echo "Standalone app created: $APP_BUNDLE"
echo "To build DMG from this app:"
echo "APP_BUNDLE_PATH=\"$APP_BUNDLE\" ./scripts/create_installer_dmg.sh"

scripts/create_installer_dmg.sh Executable file
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
if [[ -d "$ROOT_DIR/ManeshTraderMac.app" ]]; then
APP_BUNDLE_DEFAULT="$ROOT_DIR/ManeshTraderMac.app"
else
APP_BUNDLE_DEFAULT="$ROOT_DIR/ManeshTrader.app"
fi
APP_BUNDLE="${APP_BUNDLE_PATH:-$APP_BUNDLE_DEFAULT}"
if ! command -v create-dmg >/dev/null 2>&1; then
echo "create-dmg not found. Install with: brew install create-dmg" >&2
exit 1
fi
if [[ ! -d "$APP_BUNDLE" ]]; then
echo "App bundle not found: $APP_BUNDLE" >&2
echo "Set APP_BUNDLE_PATH to a built .app bundle or build one first." >&2
exit 1
fi
APP_FILENAME="$(basename "$APP_BUNDLE")"
APP_NAME="${APP_FILENAME%.app}"
TS="$(date +%Y%m%d-%H%M%S)"
STAGE_DIR="$ROOT_DIR/dist-$TS"
OUT_DMG="$ROOT_DIR/${APP_NAME}-$TS.dmg"
mkdir -p "$STAGE_DIR"
cp -R "$APP_BUNDLE" "$STAGE_DIR/"
create-dmg \
--volname "${APP_NAME} Installer" \
--window-size 600 400 \
--icon-size 120 \
--icon "$APP_FILENAME" 175 190 \
--icon "Applications" 425 190 \
--hide-extension "$APP_FILENAME" \
--app-drop-link 425 190 \
"$OUT_DMG" \
"$STAGE_DIR"
echo "Created installer: $OUT_DMG"

scripts/create_mac_app.sh Executable file
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
APP_NAME="ManeshTrader"
APP_PATH="$ROOT_DIR/${APP_NAME}.app"
ICON_PATH="$ROOT_DIR/assets/icon/${APP_NAME}.icns"
if ! command -v osacompile >/dev/null 2>&1; then
echo "Error: osacompile is not available on this macOS installation." >&2
exit 1
fi
ESCAPED_ROOT="${ROOT_DIR//\"/\\\"}"
SCRIPT_FILE="$(mktemp)"
cat > "$SCRIPT_FILE" <<EOF
on run
tell application "Terminal"
activate
do script "cd \"$ESCAPED_ROOT\" && ./run.sh"
end tell
end run
EOF
rm -rf "$APP_PATH"
osacompile -o "$APP_PATH" "$SCRIPT_FILE"
rm -f "$SCRIPT_FILE"
if [[ -f "$ICON_PATH" ]]; then
cp "$ICON_PATH" "$APP_PATH/Contents/Resources/applet.icns"
fi
echo "Created: $APP_PATH"
echo "You can drag ${APP_NAME}.app into /Applications if desired."

from __future__ import annotations
from pathlib import Path
from PIL import Image, ImageDraw, ImageFilter
def lerp(a: float, b: float, t: float) -> float:
return a + (b - a) * t
def make_gradient(size: int) -> Image.Image:
img = Image.new("RGBA", (size, size), (0, 0, 0, 255))
px = img.load()
top = (8, 17, 40)
bottom = (17, 54, 95)
for y in range(size):
t = y / (size - 1)
color = tuple(int(lerp(top[i], bottom[i], t)) for i in range(3)) + (255,)
for x in range(size):
px[x, y] = color
return img
def rounded_rect_mask(size: int, radius: int) -> Image.Image:
m = Image.new("L", (size, size), 0)
d = ImageDraw.Draw(m)
d.rounded_rectangle((0, 0, size - 1, size - 1), radius=radius, fill=255)
return m
def draw_icon(size: int = 1024) -> Image.Image:
base = make_gradient(size)
draw = ImageDraw.Draw(base)
# Soft vignette
vignette = Image.new("RGBA", (size, size), (0, 0, 0, 0))
vd = ImageDraw.Draw(vignette)
vd.ellipse((-size * 0.25, -size * 0.15, size * 1.25, size * 1.15), fill=(255, 255, 255, 26))
vd.ellipse((-size * 0.1, size * 0.55, size * 1.1, size * 1.5), fill=(0, 0, 0, 70))
vignette = vignette.filter(ImageFilter.GaussianBlur(radius=size * 0.06))
base = Image.alpha_composite(base, vignette)
draw = ImageDraw.Draw(base)
# Grid lines
grid_color = (190, 215, 255, 34)
margin = int(size * 0.16)
for i in range(1, 5):
y = int(lerp(margin, size - margin, i / 5))
draw.line((margin, y, size - margin, y), fill=grid_color, width=max(1, size // 512))
# Candlestick bodies/wicks
center_x = size // 2
widths = int(size * 0.09)
gap = int(size * 0.05)
candles = [
(center_x - widths - gap, 0.62, 0.33, 0.57, 0.36, (18, 214, 130, 255)),
(center_x, 0.72, 0.40, 0.45, 0.65, (255, 88, 88, 255)),
(center_x + widths + gap, 0.58, 0.30, 0.52, 0.34, (18, 214, 130, 255)),
]
for x, low, high, body_top, body_bottom, color in candles:
x = int(x)
y_low = int(size * low)
y_high = int(size * high)
y_a = int(size * body_top)
y_b = int(size * body_bottom)
y_top = min(y_a, y_b)
y_bottom = max(y_a, y_b)
wick_w = max(3, size // 180)
draw.line((x, y_high, x, y_low), fill=(220, 235, 255, 220), width=wick_w)
bw = widths
draw.rounded_rectangle(
(x - bw // 2, y_top, x + bw // 2, y_bottom),
radius=max(6, size // 64),
fill=color,
)
# Trend arrows
arrow_green = (45, 237, 147, 255)
arrow_red = (255, 77, 77, 255)
up = [
(int(size * 0.20), int(size * 0.70)),
(int(size * 0.29), int(size * 0.61)),
(int(size * 0.25), int(size * 0.61)),
(int(size * 0.25), int(size * 0.52)),
(int(size * 0.15), int(size * 0.52)),
(int(size * 0.15), int(size * 0.61)),
(int(size * 0.11), int(size * 0.61)),
]
down = [
(int(size * 0.80), int(size * 0.34)),
(int(size * 0.71), int(size * 0.43)),
(int(size * 0.75), int(size * 0.43)),
(int(size * 0.75), int(size * 0.52)),
(int(size * 0.85), int(size * 0.52)),
(int(size * 0.85), int(size * 0.43)),
(int(size * 0.89), int(size * 0.43)),
]
draw.polygon(up, fill=arrow_green)
draw.polygon(down, fill=arrow_red)
# Rounded-square icon mask
mask = rounded_rect_mask(size, radius=int(size * 0.23))
out = Image.new("RGBA", (size, size), (0, 0, 0, 0))
out.paste(base, (0, 0), mask)
# Subtle border
bd = ImageDraw.Draw(out)
bd.rounded_rectangle(
(2, 2, size - 3, size - 3),
radius=int(size * 0.23),
outline=(255, 255, 255, 44),
width=max(2, size // 256),
)
return out
def main() -> None:
out_path = Path("assets/icon/ManeshTrader.png")
out_path.parent.mkdir(parents=True, exist_ok=True)
img = draw_icon(1024)
img.save(out_path)
print(f"Wrote {out_path}")
if __name__ == "__main__":
main()

tests/conftest.py Normal file
from __future__ import annotations
import sys
from pathlib import Path
ROOT = Path(__file__).resolve().parents[1]
if str(ROOT) not in sys.path:
sys.path.insert(0, str(ROOT))

tests/test_exporting.py Normal file
from __future__ import annotations
import pandas as pd
from manesh_trader.exporting import df_for_export
def test_df_for_export_handles_named_datetime_index() -> None:
idx = pd.date_range("2025-01-01", periods=2, freq="D", name="Date")
df = pd.DataFrame({"Close": [100.0, 101.0]}, index=idx)
out = df_for_export(df)
assert "timestamp" in out.columns
assert "Date" not in out.columns
assert isinstance(out.loc[0, "timestamp"], str)
def test_df_for_export_handles_unnamed_index() -> None:
idx = pd.date_range("2025-01-01", periods=2, freq="D")
df = pd.DataFrame({"Close": [100.0, 101.0]}, index=idx)
out = df_for_export(df)
assert "timestamp" in out.columns
assert isinstance(out.loc[0, "timestamp"], str)

tests/test_strategy.py Normal file
from __future__ import annotations
import pandas as pd
from manesh_trader.constants import TREND_BEAR, TREND_BULL, TREND_NEUTRAL
from manesh_trader.strategy import classify_bars, detect_trends
def make_df(rows: list[dict[str, float]]) -> pd.DataFrame:
idx = pd.date_range("2025-01-01", periods=len(rows), freq="D")
return pd.DataFrame(rows, index=idx)
def test_classify_bars_real_bull_real_bear_and_fake() -> None:
df = make_df(
[
{"Open": 100, "High": 110, "Low": 95, "Close": 105, "Volume": 1000},
{"Open": 106, "High": 111, "Low": 102, "Close": 112, "Volume": 1000},
{"Open": 111, "High": 112, "Low": 100, "Close": 99, "Volume": 1000},
{"Open": 100, "High": 103, "Low": 97, "Close": 101, "Volume": 1000},
]
)
out = classify_bars(
df,
use_body_range=False,
volume_filter_enabled=False,
volume_sma_window=20,
volume_multiplier=1.0,
)
assert list(out["classification"]) == ["unclassified", "real_bull", "real_bear", "fake"]
def test_classify_bars_with_body_range_ignores_wicks() -> None:
df = make_df(
[
{"Open": 100, "High": 130, "Low": 90, "Close": 105, "Volume": 1000},
{"Open": 106, "High": 109, "Low": 104, "Close": 108, "Volume": 1000},
]
)
wick_based = classify_bars(
df,
use_body_range=False,
volume_filter_enabled=False,
volume_sma_window=20,
volume_multiplier=1.0,
)
body_based = classify_bars(
df,
use_body_range=True,
volume_filter_enabled=False,
volume_sma_window=20,
volume_multiplier=1.0,
)
assert wick_based.iloc[1]["classification"] == "fake"
assert body_based.iloc[1]["classification"] == "real_bull"
def test_detect_trends_fake_bars_do_not_break_real_bar_sequence() -> None:
df = make_df(
[
{"Open": 100, "High": 110, "Low": 95, "Close": 105, "Volume": 1000},
{"Open": 106, "High": 112, "Low": 104, "Close": 113, "Volume": 1000},
{"Open": 113, "High": 115, "Low": 108, "Close": 112, "Volume": 1000},
{"Open": 112, "High": 120, "Low": 111, "Close": 121, "Volume": 1000},
]
)
classified = classify_bars(
df,
use_body_range=False,
volume_filter_enabled=False,
volume_sma_window=20,
volume_multiplier=1.0,
)
trended, events = detect_trends(classified)
assert list(classified["classification"]) == ["unclassified", "real_bull", "fake", "real_bull"]
assert trended.iloc[-1]["trend_state"] == TREND_BULL
assert any(e.event == "Bullish trend started" for e in events)
def test_detect_trends_reversal_requires_two_opposite_real_bars() -> None:
df = make_df(
[
{"Open": 100, "High": 110, "Low": 95, "Close": 105, "Volume": 1000},
{"Open": 106, "High": 112, "Low": 104, "Close": 113, "Volume": 1000},
{"Open": 113, "High": 120, "Low": 111, "Close": 121, "Volume": 1000},
{"Open": 120, "High": 121, "Low": 109, "Close": 108, "Volume": 1000},
{"Open": 108, "High": 109, "Low": 95, "Close": 94, "Volume": 1000},
]
)
classified = classify_bars(
df,
use_body_range=False,
volume_filter_enabled=False,
volume_sma_window=20,
volume_multiplier=1.0,
)
trended, events = detect_trends(classified)
assert trended.iloc[2]["trend_state"] == TREND_BULL
assert trended.iloc[3]["trend_state"] == TREND_BULL
assert trended.iloc[4]["trend_state"] == TREND_BEAR
assert any("Bearish reversal confirmed" in e.event for e in events)
def test_detect_trends_remains_neutral_without_two_consecutive_real_bars() -> None:
df = make_df(
[
{"Open": 100, "High": 110, "Low": 95, "Close": 105, "Volume": 1000},
{"Open": 105, "High": 108, "Low": 102, "Close": 106, "Volume": 1000},
{"Open": 106, "High": 116, "Low": 100, "Close": 117, "Volume": 1000},
{"Open": 117, "High": 118, "Low": 111, "Close": 115, "Volume": 1000},
{"Open": 115, "High": 116, "Low": 104, "Close": 103, "Volume": 1000},
]
)
classified = classify_bars(
df,
use_body_range=False,
volume_filter_enabled=False,
volume_sma_window=20,
volume_multiplier=1.0,
)
trended, events = detect_trends(classified)
assert trended.iloc[-1]["trend_state"] == TREND_NEUTRAL
assert len(events) == 0