Compare commits

...

222 Commits
0.9 ... master

Author SHA1 Message Date
3024816525 hotfix: socktop agent was not creating its SSL certificate on first launch after the axum server version upgrade.
Some checks failed
CI / build (ubuntu-latest) (push) Has been cancelled
CI / build (windows-latest) (push) Has been cancelled
2025-11-21 00:21:05 -08:00
1d7bc42d59 fix unit test, move to macro cargo_bin! 2025-11-21 00:07:44 -08:00
518ae8c2bf update axum server version
2025-11-17 15:09:53 -08:00
6eb1809309 set connector back to crate version 2025-11-17 14:15:39 -08:00
1c01902a71 update cargo version number 2025-11-17 14:13:48 -08:00
9d302ad475 patch header for small monitors and increase cargo version in advance of publish. 2025-11-17 11:52:22 -08:00
7875f132f7 Make help modal scrollable for small resolutions
- Add Up/Down arrow key handling in help modal
- Display scrollbar when content exceeds viewport
- Update title to indicate scrollability
- Fixes content cutoff on small terminal windows
2025-11-17 11:29:23 -08:00
0d789fb97c
Add TUI improvements: CPU averaging, max memory tracking, and fuzzy process search (#23)
This commit implements several major improvements to the TUI experience:

1. CPU Average Display in Main Window
   - Show average CPU usage over monitoring period alongside current value
   - Format: "CPU avg (now: 45.2% | avg: 52.3%)"
   - Helps identify sustained vs momentary CPU spikes

2. Max Memory Tracking in Process Details Modal
   - Track and display peak memory usage since monitoring started
   - Shown as "Max Memory: 67.8 MB" in yellow for emphasis
   - Helps identify memory leaks and usage patterns
   - Resets when switching to a different process

3. Fuzzy Process Search
   - Press / to activate search mode with bordered search box
   - Type to fuzzy-match process names (case-insensitive)
   - Press Enter to auto-select first result
   - Navigate results with arrow keys while typing
   - Press c to clear filter
   - Press / again to edit existing search

   Search box features:
   - Yellow bordered box for high visibility
   - Active mode: "Search: query_"
   - Filter mode: "Filter: query (press / to edit, c to clear)"

   Technical implementation:
   - Centralized filtering with get_filtered_sorted_indices()
   - Consistent filtering across display, navigation, mouse, and auto-scroll
   - Proper content area offset calculation for search box
   - Real-time filtering as user types

4. Code Quality Improvements
   - Created ProcessDisplayParams and ProcessKeyParams structs
   - Created MemoryIoParams struct for process modal rendering
   - Reduced function arguments to stay under clippy limits
   - Exported get_filtered_sorted_indices for reuse

Files Modified:
- socktop/src/app.rs: Search state, auto-scroll with filtering, max memory tracking
- socktop/src/ui/cpu.rs: CPU average calculation and display
- socktop/src/ui/processes.rs: Fuzzy search, filtering, parameter structs
- socktop/src/ui/modal.rs: Updated help modal with new shortcuts
- socktop/src/ui/modal_process.rs: Max memory display, MemoryIoParams struct
- socktop/src/ui/modal_types.rs: Added max_mem_bytes field

Testing:
- All tests pass
- No clippy warnings
- Cargo fmt applied
- Tested search, navigation, mouse clicks, and auto-scroll
- Verified on both filtered and unfiltered process lists

Breaking Changes:
- None (all changes are additive features)

Closes: (performance monitoring improvements)
2025-11-17 11:24:32 -08:00
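A case-insensitive fuzzy match like the one described in the search feature above can be sketched in a few lines of Rust; `fuzzy_match` and `filtered_indices` are illustrative stand-ins (only `get_filtered_sorted_indices` is named in the commit), not the project's actual code:

```rust
/// Case-insensitive subsequence match: every character of `query`
/// must appear in `name`, in order, but not necessarily adjacent.
fn fuzzy_match(name: &str, query: &str) -> bool {
    let name = name.to_lowercase();
    let mut chars = name.chars();
    query.to_lowercase().chars().all(|q| chars.any(|c| c == q))
}

/// Return indices of processes whose names match the filter,
/// mirroring the centralized get_filtered_sorted_indices idea.
fn filtered_indices(names: &[&str], query: &str) -> Vec<usize> {
    names
        .iter()
        .enumerate()
        .filter(|(_, n)| fuzzy_match(n, query))
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    let procs = ["firefox", "cargo", "socktop_agent"];
    println!("{:?}", filtered_indices(&procs, "sktp")); // prints "[2]"
}
```

A subsequence match like this is what lets a short query such as `sktp` hit `socktop_agent` without typing the full name.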
5ddaed298b
Optimize socktop_agent for reduced binary size and memory footprint (#22)
This commit implements several optimizations to make socktop_agent
significantly more lightweight without sacrificing functionality.

Changes:

1. Reduced Tokio Runtime Thread Pool (main.rs)
   - Changed from default (num_cpus) to 2 worker threads
   - Configurable via SOCKTOP_WORKER_THREADS environment variable
   - Rationale: Agent is I/O-bound, not CPU-intensive
   - Memory savings: ~6-12 MB on typical 8-core systems

2. Minimal Tokio Features (Cargo.toml)
   - Changed from features = ["full"] to minimal set:
     ["rt-multi-thread", "net", "sync", "macros"]
   - Removed unused features: io, fs, process, signal, time
   - Binary size reduction: ~200-300 KB
   - Faster compile times

3. Optional Tracing (Cargo.toml, main.rs, metrics.rs)
   - Made tracing dependencies optional with "logging" feature flag
   - Disabled by default for production builds
   - Binary size reduction: 1.5 MB (27%!)
   - Enable with: cargo build --features logging

4. Cleanup (Cargo.toml)
   - Removed unused tokio-process dependency

Results:
- Binary size: 5.6 MB → 4.0 MB (28% reduction)
- Memory usage: 25-40 MB → 15-25 MB (30-40% reduction)
- Worker threads: 8+ → 2 (75% reduction on 8-core systems)

Testing:
- All tests pass with and without logging feature
- No clippy warnings
- Functionality unchanged
- Production-ready

Breaking Changes:
- None (all changes are backward compatible)
- Default behavior is now more lightweight
- Logging can be re-enabled with --features logging

To build with logging for debugging:
  cargo build --package socktop_agent --release --features logging
2025-11-17 09:51:41 -08:00
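Resolving the worker-thread count from `SOCKTOP_WORKER_THREADS` with a fallback of 2 might look like this; the helper name and validation rules are assumptions, only the variable name and default come from the commit:

```rust
use std::env;

/// Pick the tokio worker-thread count: use SOCKTOP_WORKER_THREADS if it
/// parses to a positive integer, otherwise fall back to the lean default of 2.
fn worker_threads_from(raw: Option<&str>) -> usize {
    raw.and_then(|v| v.trim().parse::<usize>().ok())
        .filter(|&n| n > 0)
        .unwrap_or(2)
}

fn main() {
    let n = worker_threads_from(env::var("SOCKTOP_WORKER_THREADS").ok().as_deref());
    // The agent would then size its runtime accordingly, e.g.:
    // tokio::runtime::Builder::new_multi_thread().worker_threads(n).enable_all().build()
    println!("worker threads: {n}");
}
```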
1528568c30
Merge pull request #21 from jasonwitty/feature/about-modal
Feature/about modal
2025-11-17 00:18:55 -08:00
6f238cdf25 tweak hotkeys, add info panel, optimize fonts and hotkeys for about and info panel.
2025-11-17 00:05:02 -08:00
ffe451edaa cargo fmt 2025-10-26 02:32:11 -07:00
c9bde52cb1 move logo to theme file. 2025-10-26 02:30:46 -07:00
0603746d7c cargo fmt 2025-10-26 02:18:01 -07:00
25632f3427 Add About modal with sock ASCII art 2025-10-26 02:16:42 -07:00
e51cdb0c50 display tweaks
make it more pretty
2025-10-06 12:05:12 -07:00
1cb05d404b fix: add backward compatibility for DiskInfo fields 2025-10-06 11:43:58 -07:00
4196066e57 fix: NVMe temperature detection - contains() check and /dev/ prefix 2025-10-06 11:40:49 -07:00
47e96c7d92 fix: refresh component values to collect NVMe temperatures 2025-10-06 11:15:36 -07:00
bae2ecb79a fix: lookup temperature for parent disk, not partition 2025-10-06 11:06:30 -07:00
bd0d15a1ae fix: correct disk size aggregation and nvme temperature detection 2025-10-06 10:52:44 -07:00
689498c5f4 fix: show parent disks with aggregated partition stats 2025-10-06 10:46:51 -07:00
34e260a612 feat: disk section enhancements - temperature, partition indentation, duplicate filtering 2025-10-06 10:30:55 -07:00
47eff3a75c remove unused import. / clippy cleanup
2025-10-06 10:01:40 -07:00
0210b49219 cargo fmt 2025-10-06 09:52:36 -07:00
70a150152c fix for windows build error 2025-10-06 09:51:11 -07:00
f4b54db399 fix for windows build error. 2025-10-06 09:50:38 -07:00
e857cfc665 add processes window cleanup
- refactor code
- add unit test
- fix warnings.
2025-10-05 00:07:27 -07:00
e66008f341 initial check for process summary screen
This check-in offers alpha support for per-process metrics: you can view threads, process CPU usage over time, IO, memory, CPU time, parent process, command, uptime, and journal entries. It is unfinished, but all major functionality is available and I wanted to make it available for feedback and testing.
2025-10-02 16:54:27 -07:00
a238ce320b
Merge pull request #15 from jasonwitty/feature/connection-error-modal
feature - add error modal support and retry
2025-09-15 10:34:50 -07:00
b635f5d7f4 feature - add error modal support and retry
2025-09-15 10:16:47 -07:00
18b41c1b45
Merge pull request #14 from jasonwitty/feature/extract-socktop-connector
Refactor for additional socktop connector library
2025-09-10 15:01:17 -07:00
b4ed036357 bump version
2025-09-10 13:05:50 -07:00
ec0e409488 fix clippy warnings. 2025-09-10 11:43:12 -07:00
08f248c696 Housekeeping and QOL
non functional update:

- refactor stream of consciousness into separate files.
- combine equivalent functions used in networking and wasm features.
- cleanups and version bumps.
2025-09-10 10:39:21 -07:00
cea133b7da show actual metrics 2025-09-10 09:32:00 -07:00
e679896ca0 Add AI-generated zellij plugin scaffold 2025-09-09 15:51:44 -07:00
5e5fde190a add screenshot 2025-09-09 13:49:06 -07:00
8286d21a2a Merge branch 'feature/extract-socktop-connector' of https://github.com/jasonwitty/socktop into feature/extract-socktop-connector 2025-09-09 13:43:48 -07:00
b91fc7b016 allow user to easily override location with text entry. 2025-09-09 13:43:45 -07:00
f936767835
Update README.md 2025-09-09 02:57:32 -07:00
5f2777cdb2
Update README.md 2025-09-09 02:53:50 -07:00
49164da105 docs: Complete WASM documentation update - reflect full networking capabilities 2025-09-09 02:42:12 -07:00
22c1f80e70 docs(connector): update README to reflect full WASM support 2025-09-09 02:38:35 -07:00
a486225008 fix: formatting from cargo fmt 2025-09-09 02:32:20 -07:00
d97f7507e8 feat(connector): implement gzipped protobuf support for WASM and fix all warnings 2025-09-09 02:30:16 -07:00
e4186a7ec0 WASM compatibility 2025-09-08 12:29:03 -07:00
f59c28d966 WASM compatibility update
Related to: Usage as a lib #8

1. Feature gating of TLS and other features not supported with WASM.
2. Updated documentation.
3. Creation of AI slop WASM example for verification.
2025-09-08 12:28:44 -07:00
06cd6d0c82 Reference: Usage as a lib #8
- Implement protocol versioning
- migrate to thiserror
- general error handling improvements in socktop_connector lib
- improve documentation
- increment version
2025-09-07 18:55:23 -07:00
ffc246b705 Add WASM compatibility documentation and minimal tokio features 2025-09-04 14:49:39 -07:00
cd2816915d cargo fmt 2025-09-04 06:19:16 -07:00
7cd5941434 Add continuous monitoring examples to documentation 2025-09-04 06:17:23 -07:00
76c7fe1d6f Fix CI: Update test path for WebSocket integration test 2025-09-04 06:11:59 -07:00
eed04f1d5c Fix remaining clippy warnings in socktop_agent 2025-09-04 06:04:57 -07:00
764c25846f Fix clippy warnings: collapse nested if statements using let-else patterns 2025-09-04 05:58:17 -07:00
a9bf4208ab cargo fmt 2025-09-04 05:53:59 -07:00
9c1416eabf Fix build script to use vendored protoc binary for CI 2025-09-04 05:52:57 -07:00
e7350f8908 Update Cargo.lock with protoc-bin-vendored dependency 2025-09-04 05:50:17 -07:00
2647b611d2 Fix build script to use protoc-bin-vendored for CI compatibility 2025-09-04 05:48:02 -07:00
a359f17367 fix for failed CI build 2025-09-04 05:45:13 -07:00
d93b7aca5a remove invalid slug 2025-09-04 05:41:59 -07:00
e51054811c Refactor for additional socktop connector library
- socktop_connector allows you to communicate with the socktop agent directly from your code without needing to implement the agent API yourself.
- will also be used for a non-TUI "socktop collector" implementation in the future.
- moved to Rust 2024 to take advantage of some new features that helped with the refactor.
- fixed everything that exploded with the update.
- added rust docs for the lib.
2025-09-04 05:30:25 -07:00
b74242e6d9
Merge pull request #13 from jasonwitty/6-accessibility-cross-compile-guide
add license file
2025-09-03 23:11:59 -07:00
4e378b882a add license file
2025-09-03 23:10:48 -07:00
622767a605
Merge pull request #7 from jasonwitty/6-accessibility-cross-compile-guide
re: Accessibility: Cross-compile guide
2025-08-30 02:00:13 -07:00
0c5a1d7553 update readme 2025-08-30 01:59:25 -07:00
0bd709d2a7 slipstream note for rpi users on kernel version 2025-08-29 11:28:59 -07:00
31f5f9ce76 re: Accessibility: Cross-compile guide 2025-08-29 11:23:41 -07:00
df2308e6e9 code optimizations to reduce cpu usage of agent on all platforms and additional unit test. 2025-08-28 16:03:05 -07:00
7592709a43 clamp then divide by cores for more accurate statistics 2025-08-28 13:11:48 -07:00
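Clamp-then-divide normalization amounts to bounding the raw reading at the theoretical machine-wide maximum and scaling by core count; a minimal sketch (not the agent's exact code):

```rust
/// Normalize a raw per-process CPU reading: clamp to the theoretical
/// maximum (100% per core), then divide by core count so the result
/// is a share of total machine capacity in 0..=100.
fn normalize_cpu(raw_pct: f32, cores: usize) -> f32 {
    let max = 100.0 * cores as f32;
    raw_pct.clamp(0.0, max) / cores as f32
}

fn main() {
    // 8 cores: a runaway reading of 950% clamps to 800%, i.e. 100% of the box.
    println!("{}", normalize_cpu(950.0, 8));
}
```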
61fe1cc38e socktop_agent: bump version to 1.40.65 2025-08-28 12:27:48 -07:00
eed346abb6 scripts: add publish_socktop_agent.sh job 2025-08-28 12:07:03 -07:00
ab3bb33711 socktop_agent: bump version to 1.40.64 2025-08-28 12:03:45 -07:00
7caf2f4bfb remove unused var 2025-08-27 17:15:35 -07:00
b249c7ba99 Update metrics.rs 2025-08-27 16:55:12 -07:00
f0858525e8 fix for macos defect where processes less than .01% were being filtered 2025-08-27 16:55:09 -07:00
2fe005ed90 ProcessesToUpdate::All enum 2025-08-27 16:20:55 -07:00
ca6a5cbdfa use ProcessesToUpdate::All ENUM 2025-08-27 16:20:41 -07:00
56301d61fd fixes for non linux compilation issues. 2025-08-27 16:11:17 -07:00
55e5c708fe MACOS / NON LINUX metrics optimizations. 2025-08-27 16:00:29 -07:00
2d17cf1598 additional optimizations for macos 2025-08-27 15:05:38 -07:00
353c08c35e increment version and macos performance 2025-08-26 12:14:32 -07:00
f13ea45360 increment version 2025-08-26 10:57:01 -07:00
8ce00a5dad non linux optimizations for macbook 2025-08-26 10:14:14 -07:00
f37b8d9ff4 chore(agent): fix clippy unused mut on non-linux process list 2025-08-26 00:22:29 -07:00
322981ada7 cargo fmt and version bump 2025-08-26 00:20:52 -07:00
3394beab67 chore: make pre-commit resilient when cargo absent 2025-08-26 00:18:10 -07:00
c9ebea92f5 perf(agent,non-linux): enable CPU core normalization by default; tighter scaling threshold 2025-08-25 23:50:26 -07:00
e2dc5e8ac9 perf(agent,non-linux): add per-process CPU scaling heuristic to reduce overreporting 2025-08-25 23:47:31 -07:00
beddba0072 perf(agent,non-linux): reduce process collection overhead; configurable CPU sample delay 2025-08-25 23:44:28 -07:00
cacc4cba9f cargo fmt 2025-08-25 23:24:50 -07:00
66270c16b7 perf(agent): windows/mac cpu collection tweak; add optional normalization; silence linux dead_code 2025-08-25 22:59:07 -07:00
00d5777d05 Merge branch 'master' of https://github.com/jasonwitty/socktop 2025-08-25 22:35:02 -07:00
f62b5274d2 optimize non linux metrics collection 2025-08-25 22:35:01 -07:00
bbbe35111a
Update README.md 2025-08-25 01:33:40 -07:00
a4356b5ece update readme with animated demo 2025-08-24 20:35:56 -07:00
b6e656738b chore(release): bump to 1.40.0
2025-08-24 18:56:40 -07:00
f83cb07d57 Release candidate 1.4
increment version, support version flag on socktop
2025-08-24 18:03:45 -07:00
7697c7dc2b docs: add per-crate README.md and link via Cargo.toml readme field 2025-08-24 17:56:22 -07:00
1043fffc8d
Merge pull request #5 from jasonwitty/feature/housekeeping
Feature/housekeeping
2025-08-24 17:46:26 -07:00
ce59dd9dfe chore(agent): fix clippy needless_return for non-linux process collection
2025-08-24 12:52:56 -07:00
8d48fa4c3b increment version 2025-08-24 12:47:49 -07:00
51e702368e cargo fmt 2025-08-24 12:40:35 -07:00
85f9a44e46 perf(agent): add hostname + TTL caches (metrics/disks/processes) and reuse sys for processes 2025-08-24 12:38:32 -07:00
b2468a5936 refactor(agent): remove unused background sampler infrastructure (request-driven only) 2025-08-24 12:29:23 -07:00
8de5943f34 test(agent): move inline port parsing test to tests/port_parse.rs 2025-08-24 12:15:32 -07:00
e624751f56 chore: align sysinfo to 0.37 across workspace 2025-08-24 12:11:38 -07:00
8bd1af7a27 chore: remove unused deps (thiserror, chrono, futures, nvml-wrapper, tungstenite, bytes, prost-types) 2025-08-24 12:08:52 -07:00
5c32d15156 first run messaging.
display welcome message on first run when no profile file exists and no remote server is specified.
2025-08-24 12:01:17 -07:00
471d547b5d cargo fmt and increment version 2025-08-23 02:18:09 -07:00
d3aff590bc client: fully disable hostname verification by custom ServerCertVerifier unless --verify-hostname used 2025-08-23 02:15:45 -07:00
47910725a8 increment version 2025-08-22 22:48:41 -07:00
a8e3f4ef26 docs: describe --verify-hostname flag and default relaxed SAN behavior 2025-08-22 22:39:18 -07:00
fab1e5a104 client: default skip hostname verification; add --verify-hostname to enable 2025-08-22 22:39:06 -07:00
d0455611d5 cargo fmt 2025-08-22 14:09:02 -07:00
4c45b85c98 fix san and increment version 2025-08-22 14:07:41 -07:00
d9fdc31e8f docs: document SOCKTOP_AGENT_EXTRA_SANS for additional certificate SANs 2025-08-22 13:48:07 -07:00
dc9aa4c026 agent: support extra SANs via SOCKTOP_AGENT_EXTRA_SANS env var 2025-08-22 13:47:51 -07:00
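Reading extra SANs from `SOCKTOP_AGENT_EXTRA_SANS` presumably means splitting a delimited list; this sketch assumes comma separation (the actual delimiter and helper name are not shown in the commit):

```rust
use std::env;

/// Collect additional subject-alt-names from a raw env-var value,
/// assumed comma-separated; whitespace is trimmed and empty entries dropped.
fn sans_from(raw: Option<&str>) -> Vec<String> {
    raw.map(|v| {
        v.split(',')
            .map(str::trim)
            .filter(|s| !s.is_empty())
            .map(String::from)
            .collect()
    })
    .unwrap_or_default()
}

fn main() {
    // These would be appended to the default SANs before cert generation.
    let sans = sans_from(env::var("SOCKTOP_AGENT_EXTRA_SANS").ok().as_deref());
    println!("{sans:?}");
}
```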
c2e91bd20c docs: document TLS cert expiry and manual regeneration procedure 2025-08-22 12:57:48 -07:00
25229d6b03 cargo fmt 2025-08-22 12:49:12 -07:00
290e2a8fb2 fix for expired certificate 2025-08-22 12:46:01 -07:00
30d263c71e agent: dynamic self-signed cert validity (~397d from now) to avoid immediate expiry 2025-08-22 12:43:48 -07:00
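A ~397-day validity window "from now" can be computed from the system clock; this sketch only shows the timestamp arithmetic, not the rcgen certificate parameters the agent actually fills in:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Compute a ~397-day validity window starting now, as unix seconds;
/// real code would hand equivalent timestamps to the cert builder.
fn validity_window() -> (u64, u64) {
    const VALIDITY_SECS: u64 = 397 * 24 * 60 * 60;
    let not_before = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before unix epoch")
        .as_secs();
    (not_before, not_before + VALIDITY_SECS)
}

fn main() {
    let (nb, na) = validity_window();
    println!("valid {nb}..{na} (~{} days)", (na - nb) / 86_400);
}
```

Anchoring the window to the current time is what avoids the immediate-expiry problem the previous fixed dates caused.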
9b177f3206 cargo fmt 2025-08-22 11:55:41 -07:00
8a6ae3fcd7 increment version 2025-08-22 11:53:27 -07:00
5b8ec7efc1 agent: add --version / -V flag 2025-08-22 11:52:51 -07:00
155c420a1a cargo fmt all check ins 2025-08-22 11:10:11 -07:00
d3fa55e572 chore: ignore .vscode and remove from repo 2025-08-22 11:09:33 -07:00
faf2861b29 cargo fmt 2025-08-22 10:53:47 -07:00
59432ab1d3 agent: fix rcgen usage for self-signed cert generation 2025-08-22 10:48:01 -07:00
d1c8a64418 agent: replace openssl self-signed cert generation with rcgen (pure Rust) 2025-08-22 10:46:29 -07:00
8def4b2d06 Publish: include proto in each crate and fix build.rs paths 2025-08-22 10:44:56 -07:00
a42ca71a9f update version prior to cargo publish 2025-08-22 10:41:17 -07:00
9f675fa804
Merge pull request #4 from jasonwitty/feature/connection-profiles
Feature/connection profiles
2025-08-22 09:31:28 -07:00
3ac03c07ba enable GPU polling only when GPU is present 2025-08-22 09:27:05 -07:00
e53d0ab98d Add TLS / Token, polling interval indicators. 2025-08-21 17:38:26 -07:00
2ca51adc61 tui: refine header icons (crossed TLS when disabled, spacing fix) 2025-08-21 17:28:21 -07:00
67ecf36883 feat(tui): header shows TLS/token status and polling intervals 2025-08-21 17:24:41 -07:00
9a35306340 cargo fmt 2025-08-21 16:19:49 -07:00
a4bb6f170a feat(client): configurable metrics/process intervals with profile persistence; docs updated 2025-08-21 16:18:41 -07:00
384953d5d5
Merge pull request #3 from jasonwitty/feature/connection-profiles
Feature/connection profiles
2025-08-21 16:17:49 -07:00
f9114426cc add unit tests for profile creation and update readme 2025-08-21 14:42:15 -07:00
8ee2a03a2c chore(client): clean up demo mode integration and add stop log line 2025-08-21 13:55:02 -07:00
0275b1871d cargo fmt 2025-08-21 13:49:36 -07:00
9491dc50a8 feat(client): demo mode (--demo or select demo) auto-spawns local agent on 3231 2025-08-21 13:47:28 -07:00
e7eb3e6557 cargo fmt 2025-08-21 13:18:36 -07:00
a596acfb72 chore(client): refactor profile overwrite logic to satisfy clippy 2025-08-21 13:17:53 -07:00
b727e54589 feat(client): prompt for URL/CA when specifying a new profile name 2025-08-21 12:56:11 -07:00
2af08c455a fix(client): correct profile overwrite prompt logic (only save on confirm or --save) 2025-08-21 12:48:53 -07:00
d049846564 docs: add connection profiles section to README 2025-08-21 12:41:46 -07:00
97308f9d15 feat(client): connection profiles (--profile/-P, --save) with JSON persistence 2025-08-21 12:39:21 -07:00
4cef273e57
Merge pull request #2 from jasonwitty/feature/protobuf-processes
Feature/protobuf processes
2025-08-21 11:50:27 -07:00
660474a6ce ci cleanup 2025-08-20 20:36:49 -07:00
93dd14967d try/fix windows again ! 2025-08-20 16:32:51 -07:00
923a3872fe add logging to help debug windows problems 2025-08-20 15:47:28 -07:00
5f10e34341 windows try/fix 2025-08-20 15:20:07 -07:00
b80d322650 cargo fmt 2025-08-20 11:29:22 -07:00
fff386f9d5 fixing windows build problems. i hate windows !
Agent:
Added GET /healthz that returns 200 immediately.
File: main.rs (router now includes /healthz).
CI workflow:
Start agent from target/release on both OSes.
Set SOCKTOP_ENABLE_SSL=0 explicitly.
Ubuntu: wait on curl http://127.0.0.1:3000/healthz (60s), log tail and ss/netstat on failure.
Windows: wait on Invoke-WebRequest to /healthz (60s), capture stdout/stderr, print netstat on failure.
File: .github/workflows/ci.yml.
2025-08-20 11:26:09 -07:00
93f4e1feea fix windows build after ssl feature and optimize build 2025-08-20 10:24:24 -07:00
97255b42fb fix windows build 2025-08-20 00:14:21 -07:00
554a2c349f protobuf process list
BREAKING: Process list over WS is now Protocol Buffers; client required.
Agent: returns all processes (no server-side top-k); large payloads gzip-compressed.
Client: decodes protobuf (gz/raw), moves sorting/pagination to TUI.
Build: add prost/prost-build with vendored protoc; enable thin LTO, panic=abort, strip symbols.
Cleanup: cfg-gate Linux-only code; fix Clippy across platforms; tests updated (ws probe TLS CA).
2025-08-19 23:24:36 -07:00
10501168c5 clippy fixes. 2025-08-19 15:52:30 -07:00
d346c61c28
Merge pull request #1 from jasonwitty/feature/wss-selfsigned
SSL Support
2025-08-19 15:33:50 -07:00
7652095109 cargo fmt 2025-08-19 15:33:11 -07:00
6b58ac67f6
Merge branch 'master' into feature/wss-selfsigned 2025-08-19 15:31:10 -07:00
3ad1d52fe2 fix windows build.
Gate Linux-specific imports and fields to avoid Windows dead-code/unused warnings.
Ensure Linux-only proc CPU tracker is not referenced on non-Linux builds.
2025-08-16 17:42:18 -07:00
2e8cc24e81 remove dead code and increment version 2025-08-16 17:31:32 -07:00
36e73fd9ed cargo fmt 2025-08-16 01:25:03 -07:00
3d14e4a370 SSL Support
Add WSS/TLS (self-signed certs on the agent) with client cert pinning via --tls-ca/-t, auto-upgrading ws→wss when a CA is provided; add -p/-t flags; harden the TLS test; fix clippy; update README.

12 files changed, including:
- Cargo.toml
- README.md
- socktop_agent/Cargo.toml
- socktop_agent/src/main.rs
- socktop_agent/src/tls.rs
- socktop_agent/tests/cli_args.rs
2025-08-16 01:23:20 -07:00
c6b8c9c905 patch for macbook compatibility issues. 2025-08-15 19:21:34 -07:00
f980b6ace9
Update README.md 2025-08-13 20:17:36 -07:00
6a27280f8d Merge branch 'master' of https://github.com/jasonwitty/socktop
2025-08-12 17:12:38 -07:00
38b0cdcf0e update screenshot 2025-08-12 17:12:27 -07:00
268627ed63
Update README.md 2025-08-12 17:10:12 -07:00
55a663cf7a
Update README.md 2025-08-12 17:09:31 -07:00
8b76ccb742 Merge branch 'master' of https://github.com/jasonwitty/socktop 2025-08-12 17:07:36 -07:00
d0f6cb0e70 fix screenshot 2025-08-12 17:07:25 -07:00
56ebe6bbab
Rename tmux_4_rpis_v2jpg to tmux_4_rpis_v2.jpg 2025-08-12 17:05:40 -07:00
dc90de7ff1 update readme and add new screenshots 2025-08-12 17:02:01 -07:00
319f47eb73 Update README.md 2025-08-12 16:48:35 -07:00
fd2889ccca Update cargo versions in prep for publish. 2025-08-12 16:12:08 -07:00
0859f50897 multiple feature and performance improvements (see description)
Here are concise release notes you can paste into your GitHub release.

Release notes — 2025-08-12

Highlights

- Agent back to near-zero CPU when idle (request-driven, no background samplers).
- Accurate per-process CPU% via /proc deltas; only top-level processes (threads hidden).
- TUI: processes pane gets a scrollbar, click-to-sort (CPU% or Mem) with indicator, stable total count.
- Network panes made taller; disks slightly reduced.
- README revamped: rustup prereqs, crates.io install, update/systemd instructions.
- Clippy cleanups across agent and client.

Agent

- Reverted precompressed caches and background samplers; WebSocket path is request-driven again.
- Ensured on-demand gzip for larger replies; no per-request overhead when small.
- Processes: switched to refresh_processes_specifics with ProcessRefreshKind::everything().without_tasks() to exclude threads.
- Per-process CPU% now computed from /proc jiffies deltas using a small ProcCpuTracker (fixes "always 0%"/scaling issues).
- Optional metrics and light caching:
  - CPU temp and GPU metrics gated by env (SOCKTOP_AGENT_TEMP=0, SOCKTOP_AGENT_GPU=0).
  - Tiny TTL caches via once_cell to avoid rescanning sensors every tick.
- Dependencies: added once_cell = "1.19".
- No API changes to WS endpoints.

Client (TUI)

- Processes pane:
  - Scrollbar (mouse wheel, drag; keyboard arrows/PageUp/PageDown/Home/End).
  - Click header to sort by CPU% or Mem; dot indicator on the active column.
  - Preserves process_count across fast metrics updates to avoid flicker.
- UI/theme:
  - Shared scrollbar colors moved to ui/theme.rs; both CPU and Processes reuse them.
  - Cached pane rect to fix input handling; removed unused vars.
- Layout: network download/upload get more vertical space; disks shrink slightly.
- Clippy fixes: derive Default for ProcSortBy; style/import cleanups.

Docs

- README: added rustup install steps (with proper shell reload), install via cargo install socktop and cargo install socktop_agent, and a clear Updating section (systemd service steps included).
- Features list updated; roadmap marks independent cadences as done.

Upgrade notes

- Agent: cargo install socktop_agent --force, then restart your systemd service; if the unit changed, systemctl daemon-reload.
- TUI: cargo install socktop --force.
- Optional envs to trim overhead: SOCKTOP_AGENT_GPU=0, SOCKTOP_AGENT_TEMP=0.
- No config or API breaking changes.
2025-08-12 15:52:46 -07:00
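The /proc jiffies-delta approach in these notes boils down to differencing cumulative process ticks against total CPU ticks between samples; `CpuTracker` here is a simplified, hypothetical stand-in for the ProcCpuTracker mentioned above:

```rust
use std::collections::HashMap;

/// Minimal per-process CPU tracker: stores the last (proc_jiffies,
/// total_jiffies) sample per PID and reports CPU% from the deltas.
#[derive(Default)]
struct CpuTracker {
    last: HashMap<u32, (u64, u64)>,
}

impl CpuTracker {
    /// `proc_jiffies` = utime + stime from /proc/<pid>/stat;
    /// `total_jiffies` = sum of the fields on the cpu line of /proc/stat.
    fn sample(&mut self, pid: u32, proc_jiffies: u64, total_jiffies: u64) -> f32 {
        let pct = match self.last.get(&pid) {
            Some(&(p0, t0)) if total_jiffies > t0 => {
                100.0 * proc_jiffies.saturating_sub(p0) as f32
                    / (total_jiffies - t0) as f32
            }
            _ => 0.0, // first sample: no delta yet, avoids the "always 0%" trap only after one tick
        };
        self.last.insert(pid, (proc_jiffies, total_jiffies));
        pct
    }
}

fn main() {
    let mut t = CpuTracker::default();
    t.sample(42, 100, 10_000); // first sample, no delta -> 0%
    let pct = t.sample(42, 150, 10_100); // 50 of 100 ticks -> 50%
    println!("{pct}");
}
```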
5c002f0b2b Update README.md 2025-08-12 00:01:21 -07:00
5a824c2098 only run apt command if build box is linux 2025-08-11 23:49:45 -07:00
ffb381e40e install deps libdrm-dev libdrm-amdgpu1 on build box 2025-08-11 23:46:08 -07:00
0a70d7fd39 rustfmt 2025-08-11 23:41:23 -07:00
8d81ee1f7e clippy clean ups 2025-08-11 23:37:50 -07:00
1e248306a6 clippy clean up multiple files 2025-08-11 23:27:18 -07:00
7cd6a6e0a1 gpu clippy cleanup 2025-08-11 23:08:35 -07:00
8f58feffbe clippy clean up app.rs 2025-08-11 22:59:36 -07:00
5790ef753b clippy cleanup 2025-08-11 22:56:35 -07:00
9a49fd6b24 clippy cleanup main 2025-08-11 22:50:15 -07:00
b35e431200 clippy clean up 2025-08-11 22:46:58 -07:00
4b52382326 another clippy cleanup 2025-08-11 22:45:24 -07:00
11506699e3 clippy clean up 2025-08-11 22:44:43 -07:00
4c6c707dd0 struct cleanup 2025-08-11 22:41:12 -07:00
d69a4104fc performance improvements and formatting cleanup 2025-08-11 22:37:46 -07:00
c3f81eef25 clippy code clean up
2025-08-11 20:47:21 -07:00
05276f9eea fix clippy issues causing workflow failure. 2025-08-11 14:39:19 -07:00
0105b29bfc fix additional clippy warnings. 2025-08-11 14:34:45 -07:00
0cbba6b290 fix clippy format warnings 2025-08-11 14:30:41 -07:00
250f7bf93a remove unused vendor field. 2025-08-11 14:25:58 -07:00
4efeb3b60f update screenshots 2025-08-11 14:17:04 -07:00
d20061614c update default screenshot 2025-08-11 13:47:23 -07:00
289c9f7ebe add cargo install documentation and enable release artifacts in actions
2025-08-11 13:38:26 -07:00
bdfa74be54 add license for socktop_agent 2025-08-11 12:10:40 -07:00
6efdc35b19 add license so we can publish to cargo 2025-08-11 12:09:14 -07:00
20278d67f1 new feature: gpu support 2025-08-11 12:04:55 -07:00
a4f69a5f7d Add scrollbar to CPU per core area. 2025-08-10 23:32:44 -07:00
9b1643afac add screenshot and information about running as a service. 2025-08-09 22:40:44 -07:00
a9086eac84 Create 14900KS_arch_alacritty.jpg 2025-08-08 21:27:27 -07:00
4bd6744df4 fix clippy command 2025-08-08 17:33:30 -07:00
274a485f8d fmt: apply rustfmt 2025-08-08 17:25:15 -07:00
747aef0005 Merge branch 'master' of https://github.com/jasonwitty/socktop 2025-08-08 16:51:22 -07:00
a0e17c6e22 fix build warning and add ghostty screenshot 2025-08-08 16:46:46 -07:00
19973c24d8
Update README.md 2025-08-08 14:15:28 -07:00
9a4c8b703e
Update README.md 2025-08-08 13:24:32 -07:00
968a25eaf1
Update README.md 2025-08-08 13:22:57 -07:00
13fb22c7ee add rustup installer line for lazy people 2025-08-08 13:14:33 -07:00
6c867774f7 correct path in docs 2025-08-08 13:09:53 -07:00
7c3e4a6e39 fix executable in documentation readme (socktop_agent) 2025-08-08 13:06:15 -07:00
466a32a90a add tmux instructions to readme 2025-08-08 13:00:36 -07:00
cb4882e983 patch for ci failure. 2025-08-08 12:55:48 -07:00
107 changed files with 18411 additions and 1130 deletions

.githooks/pre-commit (new executable file, 39 lines)

@ -0,0 +1,39 @@
#!/usr/bin/env bash
# This repository uses a custom hooks directory (.githooks). To enable this pre-commit hook run:
# git config core.hooksPath .githooks
# Ensure this file is executable: chmod +x .githooks/pre-commit
set -euo pipefail
echo "[pre-commit] Running cargo fmt --all" >&2
if ! command -v cargo >/dev/null 2>&1; then
# Try loading rustup environment (common install path)
if [ -f "$HOME/.cargo/env" ]; then
# shellcheck source=/dev/null
. "$HOME/.cargo/env"
fi
fi
if ! command -v cargo >/dev/null 2>&1; then
echo "[pre-commit] cargo not found in PATH; skipping fmt (install Rust or adjust PATH)." >&2
exit 0
fi
cargo fmt --all
# Stage any Rust files that were reformatted
changed=$(git diff --name-only --diff-filter=M | grep -E '\.rs$' || true)
if [ -n "$changed" ]; then
echo "$changed" | xargs git add
echo "[pre-commit] Added formatted files" >&2
fi
# Fail if further diffs remain (shouldn't happen normally)
unfmt=$(git diff --name-only --diff-filter=M | grep -E '\.rs$' || true)
if [ -n "$unfmt" ]; then
echo "[pre-commit] Some Rust files still differ after formatting:" >&2
echo "$unfmt" >&2
exit 1
fi
exit 0

.github/workflows/ci.yml

@ -2,18 +2,128 @@ name: CI
on:
push:
pull_request:
jobs:
build:
strategy:
matrix:
os: [ubuntu-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
with:
components: clippy, rustfmt
- name: Install system dependencies (Linux)
if: matrix.os == 'ubuntu-latest'
run: sudo apt-get update && sudo apt-get install -y libdrm-dev libdrm-amdgpu1
- name: Cargo fmt
run: cargo fmt --all -- --check
- name: Clippy
run: cargo clippy --all-targets --all-features -- -D warnings
- name: Build (release)
run: cargo build --release --workspace
- name: "Linux: start agent and run WS probe"
if: matrix.os == 'ubuntu-latest'
shell: bash
run: |
set -euo pipefail
RUST_LOG=info SOCKTOP_ENABLE_SSL=0 SOCKTOP_AGENT_GPU=0 SOCKTOP_AGENT_TEMP=0 ./target/release/socktop_agent -p 3000 > agent.log 2>&1 &
AGENT_PID=$!
for i in {1..60}; do
if curl -fsS http://127.0.0.1:3000/healthz >/dev/null; then break; fi
sleep 1
done
if ! curl -fsS http://127.0.0.1:3000/healthz >/dev/null; then
echo "--- agent.log (tail) ---"
tail -n 200 agent.log || true
(command -v ss >/dev/null && ss -ltnp || netstat -ltnp) || true
kill $AGENT_PID || true
exit 1
fi
SOCKTOP_WS=ws://127.0.0.1:3000/ws cargo test -p socktop_connector --test integration_test -- --nocapture
kill $AGENT_PID || true
- name: "Windows: start agent and run WS probe"
if: matrix.os == 'windows-latest'
shell: pwsh
run: |
$env:SOCKTOP_ENABLE_SSL = "0"
$env:SOCKTOP_AGENT_GPU = "0"
$env:SOCKTOP_AGENT_TEMP = "0"
$out = Join-Path $PWD "agent.out.txt"
$err = Join-Path $PWD "agent.err.txt"
$p = Start-Process -FilePath "${PWD}\target\release\socktop_agent.exe" -ArgumentList "-p 3000" -RedirectStandardOutput $out -RedirectStandardError $err -PassThru -NoNewWindow
$ready = $false
for ($i = 0; $i -lt 60; $i++) {
$pinfo = New-Object System.Diagnostics.ProcessStartInfo
$pinfo.FileName = "curl.exe"
$pinfo.Arguments = "-fsS http://127.0.0.1:3000/healthz"
$pinfo.RedirectStandardOutput = $true
$pinfo.RedirectStandardError = $true
$pinfo.UseShellExecute = $false
$proc = [System.Diagnostics.Process]::Start($pinfo)
$proc.WaitForExit()
if ($proc.ExitCode -eq 0) { $ready = $true; break }
Start-Sleep -Seconds 1
}
if (-not $ready) {
Write-Warning "TCP connect to (127.0.0.1 : 3000) failed"
if (Test-Path $out) { Write-Host "--- agent.out (full) ---"; Get-Content $out }
if (Test-Path $err) { Write-Host "--- agent.err (full) ---"; Get-Content $err }
Write-Host "--- netstat ---"
netstat -ano | Select-String ":3000" | ForEach-Object { $_.Line }
if ($p -and !$p.HasExited) { Stop-Process -Id $p.Id -Force -ErrorAction SilentlyContinue }
throw "agent did not become ready"
}
$env:SOCKTOP_WS = "ws://127.0.0.1:3000/ws"
try {
cargo test -p socktop_connector --test integration_test -- --nocapture
} finally {
if ($p -and !$p.HasExited) { Stop-Process -Id $p.Id -Force -ErrorAction SilentlyContinue }
}
- name: Smoke test (client --help)
run: cargo run -p socktop -- --help
- name: Package artifacts (Linux)
if: matrix.os == 'ubuntu-latest'
shell: bash
run: |
set -e
mkdir -p dist
cp target/release/socktop dist/
cp target/release/socktop_agent dist/
tar czf socktop-${{ matrix.os }}.tar.gz -C dist .
- name: Package artifacts (Windows)
if: matrix.os == 'windows-latest'
shell: pwsh
run: |
New-Item -ItemType Directory -Force -Path dist | Out-Null
Copy-Item target\release\socktop.exe dist\
Copy-Item target\release\socktop_agent.exe dist\
Compress-Archive -Path dist\* -DestinationPath socktop-${{ matrix.os }}.zip -Force
- name: Upload build artifacts (ephemeral)
uses: actions/upload-artifact@v4
with:
name: socktop-${{ matrix.os }}
path: |
*.tar.gz
*.zip
- name: Upload to rolling GitHub Release (main only)
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
uses: softprops/action-gh-release@v2
with:
tag_name: latest
name: Latest build
prerelease: true
draft: false
files: |
*.tar.gz
*.zip
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
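Both WS-probe steps above implement the same pattern: poll the agent's health check up to 60 times with a short delay before giving up. A generic sketch of that bounded retry (illustrative helper, not the CI code itself):

```rust
use std::time::Duration;

// Bounded readiness retry, mirroring the CI loops above: run a probe up to
// `attempts` times, sleeping between tries, and report whether it succeeded.
fn wait_ready<F: FnMut() -> bool>(mut probe: F, attempts: u32, delay: Duration) -> bool {
    for _ in 0..attempts {
        if probe() {
            return true;
        }
        std::thread::sleep(delay);
    }
    false
}

fn main() {
    // Simulated agent that becomes healthy on the third probe.
    let mut calls = 0;
    let ok = wait_ready(
        || {
            calls += 1;
            calls >= 3
        },
        60,
        Duration::from_millis(1),
    );
    assert!(ok);
    assert_eq!(calls, 3);
    println!("ready after {calls} probes");
}
```

In CI the probe is `curl -fsS http://127.0.0.1:3000/healthz`; on failure the steps dump the agent log and listening sockets before exiting.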

.gitignore
@@ -1 +1,7 @@
/target
.vscode/
/socktop-wasm-test/target
# Documentation files from development sessions (context-specific, not for public repo)
/OPTIMIZATION_PROCESS_DETAILS.md
/THREAD_SUPPORT.md

Cargo.lock (generated): diff suppressed because it is too large.

Cargo.toml
@@ -1,35 +1,50 @@
[workspace]
resolver = "2"
members = [
"socktop",
"socktop_agent",
"socktop_connector"
]
[workspace.dependencies]
# async + streams
tokio = { version = "1", features = ["full"] }
futures = "0.3"
futures-util = "0.3"
anyhow = "1.0"
# websocket
tokio-tungstenite = { version = "0.24", features = ["__rustls-tls", "connect"] }
url = "2.5"
# JSON + error handling
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
thiserror = "1.0"
# system stats
# system stats (align across crates)
sysinfo = "0.37"
# CLI UI
ratatui = "0.28"
crossterm = "0.27"
# date/time
chrono = { version = "0.4", features = ["serde"] }
# web server (remote-agent)
axum = { version = "0.7", features = ["ws"] }
# protobuf
prost = "0.13"
dirs-next = "2"
# compression
flate2 = "1.0"
# TLS
rustls = { version = "0.23", features = ["ring"] }
rustls-pemfile = "2.1"
[profile.release]
# Favor smaller, simpler binaries with good runtime perf
lto = "thin"
codegen-units = 1
panic = "abort"
opt-level = 3
strip = "symbols"

LICENSE (new file)
@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Witty One Off
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md
@@ -1,201 +1,386 @@
# socktop
**socktop** is a remote system monitor with a rich TUI, inspired by `top`/`btop`, that talks to a lightweight agent over WebSockets.
It lets you watch CPU, memory, disks, network, temperatures, and processes on another machine in real-time — from the comfort of your terminal.
- Linux agent: near-zero CPU when idle (request-driven, no always-on sampler)
- TUI: smooth graphs, sortable process table, scrollbars, readable colors
<img src="./docs/socktop_demo.apng" width="100%">
---
## Features
- Remote monitoring via WebSocket (JSON over WS)
- Optional WSS (TLS): agent auto-generates a self-signed cert on first run; client pins the cert via --tls-ca/-t
- TUI built with ratatui
- CPU
- Overall sparkline + per-core mini bars
- Accurate per-process CPU% (Linux /proc deltas), normalized to 0-100%
- Memory/Swap gauges with human units
- Disks: per-device usage
- Network: per-interface throughput with sparklines and peak markers
- Temperatures: CPU (optional)
- Top processes (top 50)
- PID, name, CPU%, memory, and memory%
- Click-to-sort by CPU% or Mem (descending)
- Scrollbar and mouse/keyboard scrolling
- Total process count shown in the header
- Only top-level processes listed (threads hidden) — matches btop/top
- Optional GPU metrics (can be disabled)
- Optional auth token for the agent
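The 0-100% normalization mentioned above can be sketched as follows. This is an illustrative calculation from /proc tick deltas, not the agent's actual code:

```rust
// Per-process CPU% from /proc tick deltas. `total_ticks_delta` sums ticks
// across all cores between two samples, so the result is a 0-100% share of
// the whole machine rather than 0-(100 * cores)%.
fn cpu_percent(proc_ticks_delta: u64, total_ticks_delta: u64) -> f64 {
    if total_ticks_delta == 0 {
        return 0.0;
    }
    100.0 * proc_ticks_delta as f64 / total_ticks_delta as f64
}

fn main() {
    // A process that consumed a quarter of all ticks machine-wide: 25%.
    assert_eq!(cpu_percent(25, 100), 25.0);
    // No elapsed ticks yet (first sample): report 0 rather than divide by zero.
    assert_eq!(cpu_percent(5, 0), 0.0);
    println!("{}", cpu_percent(25, 100));
}
```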
---
## Prerequisites: Install Rust (rustup)
Rust is fast, safe, and cross-platform. Installing it will make your machine better. Consider yourself privileged.
Linux/macOS:
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# load cargo for this shell
source "$HOME/.cargo/env"
# ensure stable is up to date
rustup update stable
rustc --version
cargo --version
# after install you may need to reload your shell, e.g.:
exec bash # or: exec zsh / exec fish
```
Windows (for the brave): install from https://rustup.rs with the MSVC toolchain. Yes, you'll need Visual Studio Build Tools. You chose Windows — enjoy the ride.
### Raspberry Pi / Ubuntu / PopOS (required)
Install GPU support with the apt command below:
```bash
sudo apt-get update
sudo apt-get install libdrm-dev libdrm-amdgpu1
```
_Additional note for Raspberry Pi users: please update your system to the newest kernel available through apt; kernel 6.6+ uses considerably less CPU overall to run the agent. For example, on an rpi4 with a kernel older than 6.6 the agent consumes about 0.8 CPU, while on the same hardware with 6.6 or newer it consumes only about 0.2 CPU. (These numbers reflect continuous polling of the WebSocket endpoints; when not in use the usage is 0.)_
---
## Architecture
Two components:
1) Agent (remote): small Rust WS server using sysinfo + /proc. It collects metrics only when the client requests them over the WebSocket (request-driven). No background sampling loop.
2) Client (local): TUI that connects to ws://HOST:PORT/ws (or wss://HOST:PORT/ws when TLS is enabled) and renders updates.
---
## Adaptive (idle-aware) sampling
The agent samples system metrics only when at least one WebSocket client is connected. When idle (no clients), the sampler sleeps and CPU usage drops to ~0%.
How it works:
- The WebSocket handler increments/decrements a client counter in `AppState` on connect/disconnect.
- A background sampler wakes when the counter transitions from 0 → >0 and sleeps when it returns to 0.
- The most recent metrics snapshot is cached as JSON for fast responses.
Cold start behavior:
- If a client requests metrics while the cache is empty (e.g., just started or after a long idle), the agent performs a one-off synchronous collection to respond immediately.
Tuning:
- Sampling interval (active): update `spawn_sampler(state, Duration::from_millis(500))` in `socktop_agent/src/main.rs`.
- Always-on or low-frequency idle sampling: replace the "sleep when idle" logic in `socktop_agent/src/sampler.rs` with a low-frequency interval. Example sketch:
```rust
// In sampler.rs (sketch): sample every 10s when idle, 500ms when active
let idle_period = Duration::from_secs(10);
loop {
    let active = state.client_count.load(Ordering::Relaxed) > 0;
    let period = if active { Duration::from_millis(500) } else { idle_period };
    let mut ticker = tokio::time::interval(period);
    ticker.tick().await;
    if !active {
        // wake early if a client connects
        tokio::select! {
            _ = ticker.tick() => {},
            _ = state.wake_sampler.notified() => continue,
        }
    }
    let m = collect_metrics(&state).await;
    if let Ok(js) = serde_json::to_string(&m) {
        *state.last_json.write().await = js;
    }
}
```
---
## Quick start
- Build both binaries:
```bash
git clone https://github.com/jasonwitty/socktop.git
cd socktop
cargo build --release
```
- Start the agent on the target machine (default port 3000):
```bash
./target/release/socktop_agent --port 3000
```
- Connect with the TUI from your local machine:
```bash
./target/release/socktop ws://REMOTE_HOST:3000/ws
```
### Cross-compiling for Raspberry Pi
For Raspberry Pi and other ARM devices, you can cross-compile the agent from a more powerful machine:
- [Cross-compilation guide](./docs/cross-compiling.md) - Instructions for cross-compiling from Linux, macOS, or Windows hosts
### Quick demo (no agent setup)
Spin up a temporary local agent on port 3231 and connect automatically:
```bash
socktop --demo
```
Or just run `socktop` with no arguments and pick the builtin `demo` entry from the interactive profile list (if you have saved profiles, `demo` is appended). The demo agent:
- Runs locally (`ws://127.0.0.1:3231/ws`)
- Stops automatically (you'll see "Stopped demo agent on port 3231") when you quit the TUI or press Ctrl-C
---
## Install (from crates.io)
You don't need to clone this repo to use socktop. Install the published binaries with cargo:
```bash
# TUI (client)
cargo install socktop
# Agent (server)
cargo install socktop_agent
```
This drops socktop and socktop_agent into ~/.cargo/bin (add it to PATH).
Notes:
- After installing Rust via rustup, reload your shell (e.g., exec bash) so cargo is on PATH.
- Windows: you can also grab prebuilt EXEs from GitHub Actions artifacts if rustup scares you. It shouldn't. Be brave.
System-wide agent (Linux):
```bash
# If you installed with cargo, binaries are in ~/.cargo/bin
sudo install -o root -g root -m 0755 "$HOME/.cargo/bin/socktop_agent" /usr/local/bin/socktop_agent
# Install and enable the systemd service (example unit in docs/)
sudo install -o root -g root -m 0644 docs/socktop-agent.service /etc/systemd/system/socktop-agent.service
sudo systemctl daemon-reload
sudo systemctl enable --now socktop-agent
```
```bash
# Enable SSL
# Stop service
sudo systemctl stop socktop-agent
# Edit service to append SSL option and port
sudo micro /etc/systemd/system/socktop-agent.service
--
ExecStart=/usr/local/bin/socktop_agent --enableSSL --port 8443
--
# Reload
sudo systemctl daemon-reload
# Restart
sudo systemctl start socktop-agent
# check logs for certificate location
sudo journalctl -u socktop-agent -f
--
Aug 22 22:25:26 rpi-master socktop_agent[2913998]: socktop_agent: generated self-signed TLS certificate at /var/lib/socktop/.config/socktop_agent/tls/cert.pem
--
```
---
## Usage
Agent (server):
```bash
socktop_agent --port 3000
# or env: SOCKTOP_PORT=3000 socktop_agent
# optional auth: SOCKTOP_TOKEN=changeme socktop_agent
# enable TLS (self-signed cert, default port 8443; you can also use -p):
socktop_agent --enableSSL --port 8443
```
Client (TUI):
```bash
socktop ws://HOST:3000/ws
# with token:
socktop "ws://HOST:3000/ws?token=changeme"
# TLS with pinned server certificate (recommended over the internet):
socktop --tls-ca /path/to/cert.pem wss://HOST:8443/ws
# (By default hostname/SAN verification is skipped for ease on home networks. To enforce it add --verify-hostname)
socktop --verify-hostname --tls-ca /path/to/cert.pem wss://HOST:8443/ws
# shorthand:
socktop -t /path/to/cert.pem wss://HOST:8443/ws
# Note: providing --tls-ca/-t automatically upgrades ws:// to wss:// if you forget
```
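The auto-upgrade noted in the comment above amounts to a scheme rewrite when a CA is supplied. A sketch of the documented behavior (the real client may implement it differently):

```rust
// If a TLS CA was supplied but the URL still says ws://, upgrade it to wss://
// so the client does not attempt a plaintext handshake against a TLS port.
fn upgrade_scheme(url: &str, tls_ca_given: bool) -> String {
    match url.strip_prefix("ws://") {
        Some(rest) if tls_ca_given => format!("wss://{rest}"),
        _ => url.to_string(),
    }
}

fn main() {
    assert_eq!(upgrade_scheme("ws://host:8443/ws", true), "wss://host:8443/ws");
    // Already wss, or no CA given: left untouched.
    assert_eq!(upgrade_scheme("wss://host:8443/ws", true), "wss://host:8443/ws");
    assert_eq!(upgrade_scheme("ws://host:3000/ws", false), "ws://host:3000/ws");
    println!("{}", upgrade_scheme("ws://host:8443/ws", true));
}
```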
Intervals (client-driven):
- Fast metrics: ~500 ms
- Processes: ~2 s
- Disks: ~5 s
The agent stays idle unless queried. When queried, it collects just what's needed.
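Since the agent reports cumulative per-interface byte counters, the client derives KB/s by differencing successive polls at the metrics interval. A sketch of that arithmetic (assumed, not the TUI's actual code):

```rust
// KB/s from two cumulative byte counters sampled dt_ms apart.
// saturating_sub guards against counter resets (e.g., interface restart).
fn rate_kb_s(prev_bytes: u64, cur_bytes: u64, dt_ms: u64) -> f64 {
    if dt_ms == 0 {
        return 0.0;
    }
    cur_bytes.saturating_sub(prev_bytes) as f64 / 1024.0 * 1000.0 / dt_ms as f64
}

fn main() {
    // 512 KiB received over a 500 ms poll window -> 1024 KB/s.
    assert_eq!(rate_kb_s(0, 524_288, 500), 1024.0);
    // Counter reset: clamp to 0 instead of reporting a huge bogus spike.
    assert_eq!(rate_kb_s(1_000_000, 0, 500), 0.0);
    println!("{}", rate_kb_s(0, 524_288, 500));
}
```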
---
## Connection Profiles (Named)
You can save frequently used connection settings (URL + optional TLS CA path) under a short name and reuse them later.
Config file location:
- Linux (XDG): `$XDG_CONFIG_HOME/socktop/profiles.json`
- Fallback (when XDG not set): `~/.config/socktop/profiles.json`
### Creating a profile
The first time you specify a new `--profile/-P` name together with a URL (and optional `--tls-ca`), it is saved automatically:
```bash
socktop --profile prod ws://prod-host:3000/ws
# With TLS pinning:
socktop --profile prod-tls --tls-ca /path/to/cert.pem wss://prod-host:8443/ws
```
You can also set custom intervals (milliseconds):
```bash
socktop --profile prod --metrics-interval-ms 750 --processes-interval-ms 3000 ws://prod-host:3000/ws
```
If a profile already exists you will be prompted before overwriting:
```
$ socktop --profile prod ws://new-host:3000/ws
Overwrite existing profile 'prod'? [y/N]: y
```
To overwrite without an interactive prompt pass `--save`:
```bash
socktop --profile prod --save ws://new-host:3000/ws
```
### Using a saved profile
Just pass the profile name (no URL needed):
```bash
socktop --profile prod
socktop -P prod-tls # short flag
```
The stored URL (and TLS CA path, if any) plus any saved intervals will be used. TLS auto-upgrade still applies if a CA path is stored alongside a ws:// URL.
### Interactive selection (no args)
If you run `socktop` with no arguments and at least one profile exists, you will be shown a numbered list to pick from:
```
$ socktop
Select profile:
1. prod
2. prod-tls
Enter number (or blank to abort): 2
```
Choosing a number starts the TUI with that profile. A builtin `demo` option is always appended; selecting it launches a local agent on port 3231 (no TLS) and connects to `ws://127.0.0.1:3231/ws`. Pressing Enter on blank aborts without connecting.
### JSON format
An example `profiles.json` (pretty-printed):
```json
{
"profiles": {
"prod": { "url": "ws://prod-host:3000/ws" },
"prod-tls": {
"url": "wss://prod-host:8443/ws",
"tls_ca": "/home/user/certs/prod-cert.pem",
"metrics_interval_ms": 500,
"processes_interval_ms": 2000
}
},
"version": 0
}
```
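The config location (XDG with a `~/.config` fallback) can be resolved as in the sketch below; `profiles_path` is an illustrative helper, not socktop's actual API:

```rust
use std::path::PathBuf;

// $XDG_CONFIG_HOME/socktop/profiles.json, falling back to
// ~/.config/socktop/profiles.json when XDG_CONFIG_HOME is unset.
fn profiles_path(xdg_config_home: Option<&str>, home: &str) -> PathBuf {
    let base = match xdg_config_home {
        Some(xdg) => PathBuf::from(xdg),
        None => PathBuf::from(home).join(".config"),
    };
    base.join("socktop").join("profiles.json")
}

fn main() {
    assert_eq!(
        profiles_path(Some("/tmp/xdg"), "/home/user"),
        PathBuf::from("/tmp/xdg/socktop/profiles.json")
    );
    assert_eq!(
        profiles_path(None, "/home/user"),
        PathBuf::from("/home/user/.config/socktop/profiles.json")
    );
    println!("{}", profiles_path(None, "/home/user").display());
}
```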
Notes:
- The `tls_ca` path is stored as given; if you move or rotate the certificate update the profile by re-running with `--profile NAME --save`.
- Deleting a profile: edit the JSON file and remove the entry (TUI does not yet have an in-app delete command).
- Profiles are client-side convenience only; they do not affect the agent.
- Intervals: `metrics_interval_ms` controls the fast metrics poll (default 500 ms). `processes_interval_ms` controls process list polling (default 2000 ms). Values below 100 ms (metrics) or 200 ms (processes) are clamped.
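The clamping described in the last note can be sketched as below (floor values taken from the text; the function name is illustrative):

```rust
// Enforce the documented minimums: 100 ms for the fast metrics poll and
// 200 ms for the process-list poll. Anything lower is raised to the floor.
fn clamp_intervals(metrics_ms: u64, processes_ms: u64) -> (u64, u64) {
    (metrics_ms.max(100), processes_ms.max(200))
}

fn main() {
    // Sub-floor values are raised; larger values pass through unchanged.
    assert_eq!(clamp_intervals(50, 120), (100, 200));
    assert_eq!(clamp_intervals(750, 3000), (750, 3000));
    println!("{:?}", clamp_intervals(50, 120));
}
```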
---
## Updating
Update the agent (systemd):
```bash
# on the server running the agent
cargo install socktop_agent --force
sudo systemctl stop socktop-agent
sudo install -o root -g root -m 0755 "$HOME/.cargo/bin/socktop_agent" /usr/local/bin/socktop_agent
# if you changed the unit file:
# sudo install -o root -g root -m 0644 docs/socktop-agent.service /etc/systemd/system/socktop-agent.service
# sudo systemctl daemon-reload
sudo systemctl start socktop-agent
sudo systemctl status socktop-agent --no-pager
# logs:
# journalctl -u socktop-agent -f
```
Update the TUI (client):
```bash
cargo install socktop --force
socktop ws://HOST:3000/ws
```
Tip: If only the binary changed, restart is enough. If the unit file changed, run sudo systemctl daemon-reload.
---
## Configuration (agent)
- Port:
- Flag: --port 8080 or -p 8080
- Positional: socktop_agent 8080
- Env: SOCKTOP_PORT=8080
- TLS (self-signed):
- Enable: --enableSSL
- Default TLS port: 8443 (override with --port/-p)
- Certificate/Key location (created on first TLS run):
- Linux (XDG): $XDG_CONFIG_HOME/socktop_agent/tls/{cert.pem,key.pem} (defaults to ~/.config)
- The agent prints these paths on creation.
- You can set XDG_CONFIG_HOME before first run to control where certs are written.
- Additional SANs: set `SOCKTOP_AGENT_EXTRA_SANS` (comma-separated) before first TLS start to include extra IPs/DNS names in the cert. Example:
```bash
SOCKTOP_AGENT_EXTRA_SANS="192.168.1.101,myhost.internal" socktop_agent --enableSSL
```
This prevents client errors like `NotValidForName` when connecting via an IP not present in the default cert SAN list.
- Expiry / rotation: the generated cert is valid for ~397 days from creation. If the agent fails to start with an "ExpiredCertificate" error (or your client reports expiry), simply delete the existing cert and key:
```bash
rm ~/.config/socktop_agent/tls/cert.pem ~/.config/socktop_agent/tls/key.pem
# (adjust path if XDG_CONFIG_HOME is set or different user)
systemctl restart socktop-agent # if running under systemd
```
On next TLS start the agent will generate a fresh pair. Only distribute the new cert.pem to clients (never the key).
- Auth token (optional): SOCKTOP_TOKEN=changeme
- Disable GPU metrics: SOCKTOP_AGENT_GPU=0
- Disable CPU temperature: SOCKTOP_AGENT_TEMP=0
---
## Keyboard & Mouse
- Quit: q or Esc
- Processes pane:
- Click “CPU %” to sort by CPU descending
- Click “Mem” to sort by memory descending
- Mouse wheel: scroll
- Drag scrollbar: scroll
- Arrow/PageUp/PageDown/Home/End: scroll
---
## Example agent JSON
`socktop` expects the agent to send metrics in this shape:
```json
{
"cpu_total": 12.4,
"cpu_per_core": [11.2, 15.7],
"mem_total": 33554432,
"mem_used": 18321408,
"swap_total": 0,
@@ -207,42 +392,165 @@
"networks": [{"name":"eth0","received":12345678,"transmitted":87654321}],
"top_processes": [
{"pid":1234,"name":"nginx","cpu_usage":1.2,"mem_bytes":12345678}
],
"gpus": null
}
```
Notes:
- process_count is merged into the main metrics on the client when processes are polled.
- top_processes are the current top 50 (sorting in the TUI is client-side).
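Memory fields in the payload (mem_total, mem_used, mem_bytes) are raw bytes; the TUI renders them in human units. A plausible formatting sketch (assumed helper, not the actual renderer):

```rust
// Scale a raw byte count into B/KiB/MiB/GiB/TiB with one decimal place.
fn human_bytes(n: u64) -> String {
    const UNITS: [&str; 5] = ["B", "KiB", "MiB", "GiB", "TiB"];
    let mut v = n as f64;
    let mut i = 0;
    while v >= 1024.0 && i < UNITS.len() - 1 {
        v /= 1024.0;
        i += 1;
    }
    format!("{v:.1} {}", UNITS[i])
}

fn main() {
    // mem_used from the example payload above:
    assert_eq!(human_bytes(18_321_408), "17.5 MiB");
    assert_eq!(human_bytes(512), "512.0 B");
    println!("{}", human_bytes(18_321_408));
}
```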
---
## Security
Set a token on the agent and pass it as a query param from the client:
Server:
```bash
SOCKTOP_TOKEN=changeme socktop_agent --port 3000
```
Client:
```bash
socktop "ws://HOST:3000/ws?token=changeme"
```
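Server-side, the check amounts to comparing the `token` query parameter against SOCKTOP_TOKEN. A hypothetical sketch (names are illustrative; the agent's real handler is not shown here):

```rust
// Accept the connection when no token is configured, or when the query
// string carries a matching token=... parameter.
fn token_ok(query: &str, expected: Option<&str>) -> bool {
    match expected {
        None => true, // no token configured: open access
        Some(exp) => query
            .split('&')
            .filter_map(|kv| kv.split_once('='))
            .any(|(k, v)| k == "token" && v == exp),
    }
}

fn main() {
    assert!(token_ok("token=changeme", Some("changeme")));
    assert!(!token_ok("token=wrong", Some("changeme")));
    assert!(!token_ok("", Some("changeme")));
    assert!(token_ok("anything", None));
    println!("ok");
}
```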
### TLS / WSS
For encrypted connections, enable TLS on the agent and pin the server certificate on the client.
Server (generates self-signed cert and key on first run):
```bash
socktop_agent --enableSSL --port 8443
```
Client (trust/pin the server cert; copy cert.pem from the agent):
```bash
socktop --tls-ca /path/to/agent/cert.pem wss://HOST:8443/ws
```
Notes:
- Do not copy the private key off the server; only the cert.pem is needed by clients.
- When --tls-ca/-t is supplied, the client auto-upgrades ws:// to wss:// to avoid protocol mismatch.
- Hostname (SAN) verification is DISABLED by default (the cert is still pinned). Use `--verify-hostname` to enable strict SAN checking.
- You can run multiple clients with different cert paths by passing --tls-ca per invocation.
---
## Using tmux to monitor multiple hosts
You can use tmux to show multiple socktop instances in a single terminal.
![socktop screenshot](./docs/tmux_4_rpis_v3.jpg)
Monitoring 4 Raspberry Pis using tmux
Prerequisites:
- Install tmux (Ubuntu/Debian: `sudo apt-get install tmux`)
Key bindings (defaults):
- Split left/right: Ctrl-b %
- Split top/bottom: Ctrl-b "
- Move between panes: Ctrl-b + Arrow keys
- Show pane numbers: Ctrl-b q
- Close a pane: Ctrl-b x
- Detach from session: Ctrl-b d
Two panes (left/right)
- This creates a session named "socktop", splits it horizontally, and starts two socktops.
```bash
tmux new-session -d -s socktop 'socktop ws://HOST1:3000/ws' \; \
split-window -h 'socktop ws://HOST2:3000/ws' \; \
select-layout even-horizontal \; \
attach
```
Four panes (top-left, top-right, bottom-left, bottom-right)
- This creates a 2x2 grid with one socktop per pane.
```bash
tmux new-session -d -s socktop 'socktop ws://HOST1:3000/ws' \; \
split-window -h 'socktop ws://HOST2:3000/ws' \; \
select-pane -t 0 \; split-window -v 'socktop ws://HOST3:3000/ws' \; \
select-pane -t 1 \; split-window -v 'socktop ws://HOST4:3000/ws' \; \
select-layout tiled \; \
attach
```
Tips:
- Replace HOST1..HOST4 (and ports) with your targets.
- Reattach later: `tmux attach -t socktop`
---
## Platform notes
- Linux: fully supported (agent and client).
- Raspberry Pi:
- 64-bit: aarch64-unknown-linux-gnu
- 32-bit: armv7-unknown-linux-gnueabihf
- Windows:
- TUI + agent can build with stable Rust; bring your own MSVC. You're on Windows; you know the drill.
- CPU temperature may be unavailable.
- Binary EXEs for both are available in the build artifacts under Actions.
- macOS:
- TUI works; agent is primarily targeted at Linux. The agent runs fine on macOS for debugging, but I have not documented running it as a service, and given the "security" features around applications on macOS, I may not. We will see.
---
## Development
### Run in debug mode:
```bash
cargo run -p socktop -- ws://127.0.0.1:3000/ws
# TLS (dev): first run will create certs under ~/.config/socktop_agent/tls/
cargo run -p socktop_agent -- --enableSSL --port 8443
```
### Code formatting & lint:
```bash
cargo fmt
cargo clippy --all-targets --all-features
```
### Auto-format on commit
A sample pre-commit hook that runs `cargo fmt --all` is provided in `.githooks/pre-commit`.
Enable it (one-time):
```bash
git config core.hooksPath .githooks
chmod +x .githooks/pre-commit
```
Every commit will then format Rust sources and restage them automatically.
---
## Roadmap
- [x] Agent authentication (token)
- [x] Hide per-thread entries; only show processes
- [x] Sort top processes in the TUI
- [x] Configurable refresh intervals (client)
- [ ] Export metrics to file
- [x] TLS / WSS support (self-signed server cert + client pinning)
- [x] Split processes/disks to separate WS calls with independent cadences (already logical on client; formalize API)
- [ ] Outage notifications and reconnect
- [ ] Per-process detailed statistics pane
- [ ] Cleanup of the Disks section: properly display physical disks/partitions, remove duplicate entries
---
## License
MIT — see [LICENSE](LICENSE).
---
## Acknowledgements
- [ratatui](https://github.com/ratatui-org/ratatui) for the TUI
- [sysinfo](https://crates.io/crates/sysinfo) for system metrics
- [tokio-tungstenite](https://crates.io/crates/tokio-tungstenite) for WebSockets

Binary files changed (not shown): three images (775 KiB, 879 KiB, 2.2 MiB) and docs/Win-Tel.png (new, 152 KiB).

docs/cross-compiling.md (new file)
@@ -0,0 +1,204 @@
# Cross-Compiling socktop_agent for Raspberry Pi
This guide explains how to cross-compile the socktop_agent on various host systems and deploy it to a Raspberry Pi. Cross-compiling is particularly useful for older or resource-constrained Pi models where native compilation might be slow.
## Cross-Compilation Host Setup
Choose your host operating system:
- [Debian/Ubuntu](#debianubuntu-based-systems)
- [Arch Linux](#arch-linux-based-systems)
- [macOS](#macos)
- [Windows](#windows)
## Debian/Ubuntu Based Systems
### Prerequisites
Install the cross-compilation toolchain for your target Raspberry Pi architecture:
```bash
# For 64-bit Raspberry Pi (aarch64)
sudo apt update
sudo apt install gcc-aarch64-linux-gnu libc6-dev-arm64-cross libdrm-dev:arm64
# For 32-bit Raspberry Pi (armv7)
sudo apt update
sudo apt install gcc-arm-linux-gnueabihf libc6-dev-armhf-cross libdrm-dev:armhf
```
### Setup Rust Cross-Compilation Targets
```bash
# For 64-bit Raspberry Pi
rustup target add aarch64-unknown-linux-gnu
# For 32-bit Raspberry Pi
rustup target add armv7-unknown-linux-gnueabihf
```
### Configure Cargo for Cross-Compilation
Create or edit `~/.cargo/config.toml`:
```toml
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
```
## Arch Linux Based Systems
### Prerequisites
Install the cross-compilation toolchain using pacman and AUR:
```bash
# Install base dependencies
sudo pacman -S base-devel
# For 64-bit Raspberry Pi (aarch64)
sudo pacman -S aarch64-linux-gnu-gcc
# Install libdrm for aarch64 using an AUR helper (e.g., yay, paru)
yay -S aarch64-linux-gnu-libdrm
# For 32-bit Raspberry Pi (armv7)
sudo pacman -S arm-linux-gnueabihf-gcc
# Install libdrm for armv7 using an AUR helper
yay -S arm-linux-gnueabihf-libdrm
```
### Setup Rust Cross-Compilation Targets
```bash
# For 64-bit Raspberry Pi
rustup target add aarch64-unknown-linux-gnu
# For 32-bit Raspberry Pi
rustup target add armv7-unknown-linux-gnueabihf
```
### Configure Cargo for Cross-Compilation
Create or edit `~/.cargo/config.toml`:
```toml
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
```
## macOS
The recommended approach for cross-compiling from macOS is to use Docker:
```bash
# Install Docker
brew install --cask docker
# Pull a cross-compilation Docker image
docker pull messense/rust-musl-cross:armv7-musleabihf # For 32-bit Pi
docker pull messense/rust-musl-cross:aarch64-musl # For 64-bit Pi
```
### Using Docker for Cross-Compilation
```bash
# Navigate to your socktop project directory
cd path/to/socktop
# For 64-bit Raspberry Pi
docker run --rm -it -v "$(pwd)":/home/rust/src messense/rust-musl-cross:aarch64-musl cargo build --release --target aarch64-unknown-linux-musl -p socktop_agent
# For 32-bit Raspberry Pi
docker run --rm -it -v "$(pwd)":/home/rust/src messense/rust-musl-cross:armv7-musleabihf cargo build --release --target armv7-unknown-linux-musleabihf -p socktop_agent
```
The compiled binaries will be available in your local target directory.
## Windows
The recommended approach for Windows is to use Windows Subsystem for Linux (WSL2):
1. Install WSL2 with a Debian/Ubuntu distribution by following the [official Microsoft documentation](https://docs.microsoft.com/en-us/windows/wsl/install).
2. Once WSL2 is set up with a Debian/Ubuntu distribution, open your WSL terminal and follow the [Debian/Ubuntu instructions](#debianubuntu-based-systems) above.
## Cross-Compile the Agent
After setting up your environment, build the socktop_agent for your target Raspberry Pi:
```bash
# For 64-bit Raspberry Pi
cargo build --release --target aarch64-unknown-linux-gnu -p socktop_agent
# For 32-bit Raspberry Pi
cargo build --release --target armv7-unknown-linux-gnueabihf -p socktop_agent
```
## Transfer the Binary to Your Raspberry Pi
Use SCP to transfer the compiled binary to your Raspberry Pi:
```bash
# For 64-bit Raspberry Pi
scp target/aarch64-unknown-linux-gnu/release/socktop_agent pi@raspberry-pi-ip:~/
# For 32-bit Raspberry Pi
scp target/armv7-unknown-linux-gnueabihf/release/socktop_agent pi@raspberry-pi-ip:~/
```
Replace `raspberry-pi-ip` with your Raspberry Pi's IP address and `pi` with your username.
## Install Dependencies on the Raspberry Pi
SSH into your Raspberry Pi and install the required dependencies:
```bash
ssh pi@raspberry-pi-ip
# For Raspberry Pi OS (Debian-based)
sudo apt update
sudo apt install libdrm-dev libdrm-amdgpu1
# For Arch Linux ARM
sudo pacman -Syu
sudo pacman -S libdrm
```
## Make the Binary Executable and Install
```bash
chmod +x ~/socktop_agent
# Optional: Install system-wide
sudo install -o root -g root -m 0755 ~/socktop_agent /usr/local/bin/socktop_agent
# Optional: Set up as a systemd service
sudo install -o root -g root -m 0644 ~/socktop-agent.service /etc/systemd/system/socktop-agent.service
sudo systemctl daemon-reload
sudo systemctl enable --now socktop-agent
```
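Note that the earlier `scp` step only transfers the binary; the `socktop-agent.service` unit referenced above is one you supply yourself. A minimal sketch (the unit name comes from the commands above; the description, restart policy, and user are illustrative and should be adjusted for your setup):

```ini
[Unit]
Description=socktop agent (remote system monitor)
After=network-online.target
Wants=network-online.target

[Service]
# Path matches the optional system-wide install above
ExecStart=/usr/local/bin/socktop_agent
Restart=on-failure
# Illustrative; run as whichever user you transferred the binary to
User=pi

[Install]
WantedBy=multi-user.target
```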
## Troubleshooting
If you encounter issues with the cross-compiled binary:
1. **Incorrect Architecture**: Ensure you've chosen the correct target for your Raspberry Pi model:
- For Raspberry Pi 2: use `armv7-unknown-linux-gnueabihf`
- For Raspberry Pi 3/4/5 in 64-bit mode: use `aarch64-unknown-linux-gnu`
- For Raspberry Pi 3/4/5 in 32-bit mode: use `armv7-unknown-linux-gnueabihf`
2. **Dependency Issues**: Check for missing libraries:
```bash
ldd ~/socktop_agent
```
3. **Run with Backtrace**: Get detailed error information:
```bash
RUST_BACKTRACE=1 ~/socktop_agent
```

Binary files added (diffs not shown): docs/macos_intel.png (616 KiB), docs/raspberry-pi..jpg (2.4 MiB), docs/socktop_demo.apng (47 MiB), docs/tmux_4_rpis.jpg (2.6 MiB), docs/tmux_4_rpis_v2.jpg (2.3 MiB), docs/tmux_4_rpis_v3.jpg (2.3 MiB), plus one image (1.1 MiB) whose name is not shown.
proto/processes.proto (new file, 15 lines)

@@ -0,0 +1,15 @@
syntax = "proto3";
package socktop;
// All running processes. Sorting is done client-side.
message Processes {
uint64 process_count = 1; // total processes in the system
repeated Process rows = 2; // all processes
}
message Process {
uint32 pid = 1;
string name = 2;
float cpu_usage = 3; // 0..100
uint64 mem_bytes = 4; // RSS bytes
}

rust-toolchain.toml (new file, 3 lines)

@@ -0,0 +1,3 @@
[toolchain]
channel = "stable"
components = ["clippy", "rustfmt"]

scripts/check-windows.sh (new file, 47 lines)

@@ -0,0 +1,47 @@
#!/usr/bin/env bash
set -euo pipefail
# Cross-check Windows build from Linux using the GNU (MinGW) toolchain.
# - Ensures target `x86_64-pc-windows-gnu` is installed
# - Verifies MinGW cross-compiler is available (x86_64-w64-mingw32-gcc)
# - Runs cargo clippy with warnings-as-errors for the Windows target
# - Builds release binaries for the Windows target
echo "[socktop] Windows cross-check: clippy + build (GNU target)"
have() { command -v "$1" >/dev/null 2>&1; }
if ! have rustup; then
echo "error: rustup not found. Install Rust via rustup first (see README)." >&2
exit 1
fi
if ! rustup target list --installed | grep -q '^x86_64-pc-windows-gnu$'; then
echo "+ rustup target add x86_64-pc-windows-gnu"
rustup target add x86_64-pc-windows-gnu
fi
if ! have x86_64-w64-mingw32-gcc; then
echo "error: Missing MinGW cross-compiler (x86_64-w64-mingw32-gcc)." >&2
if have pacman; then
echo "Arch Linux: sudo pacman -S --needed mingw-w64-gcc" >&2
elif have apt-get; then
echo "Debian/Ubuntu: sudo apt-get install -y mingw-w64" >&2
elif have dnf; then
echo "Fedora: sudo dnf install -y mingw64-gcc" >&2
else
echo "Install the mingw-w64 toolchain for your distro, then re-run." >&2
fi
exit 1
fi
CARGO_FLAGS=(--workspace --all-targets --all-features --target x86_64-pc-windows-gnu)
echo "+ cargo clippy ${CARGO_FLAGS[*]} -- -D warnings"
cargo clippy "${CARGO_FLAGS[@]}" -- -D warnings
echo "+ cargo build --release ${CARGO_FLAGS[*]}"
cargo build --release "${CARGO_FLAGS[@]}"
echo "✅ Windows clippy and build completed successfully."

scripts/publish_socktop_agent.sh (new file, 43 lines)

@@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -euo pipefail
# Publish job: "publish new socktop agent version"
# Usage: ./scripts/publish_socktop_agent.sh <new_version>
if [[ ${1:-} == "" ]]; then
echo "Usage: $0 <new_version>" >&2
exit 1
fi
NEW_VERSION="$1"
ROOT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")/.." && pwd)
CRATE_DIR="$ROOT_DIR/socktop_agent"
echo "==> Formatting socktop_agent"
(cd "$ROOT_DIR" && cargo fmt -p socktop_agent)
echo "==> Running tests for socktop_agent"
(cd "$ROOT_DIR" && cargo test -p socktop_agent)
echo "==> Running clippy (warnings as errors) for socktop_agent"
(cd "$ROOT_DIR" && cargo clippy -p socktop_agent -- -D warnings)
echo "==> Building release for socktop_agent"
(cd "$ROOT_DIR" && cargo build -p socktop_agent --release)
echo "==> Bumping version to $NEW_VERSION in socktop_agent/Cargo.toml"
sed -i.bak -E "s/^version = \"[0-9]+\.[0-9]+\.[0-9]+\"/version = \"$NEW_VERSION\"/" "$CRATE_DIR/Cargo.toml"
rm -f "$CRATE_DIR/Cargo.toml.bak"
echo "==> Committing version bump"
(cd "$ROOT_DIR" && git add -A && git commit -m "socktop_agent: bump version to $NEW_VERSION")
CURRENT_BRANCH=$(cd "$ROOT_DIR" && git rev-parse --abbrev-ref HEAD)
echo "==> Pushing to origin $CURRENT_BRANCH"
(cd "$ROOT_DIR" && git push origin "$CURRENT_BRANCH")
echo "==> Publishing socktop_agent $NEW_VERSION to crates.io"
(cd "$ROOT_DIR" && cargo publish -p socktop_agent)
echo "==> Done: socktop_agent $NEW_VERSION published"

socktop/Cargo.toml

@@ -1,20 +1,27 @@
[package]
name = "socktop"
version = "0.1.0"
version = "1.50.0"
authors = ["Jason Witty <jasonpwitty+socktop@proton.me>"]
description = "Remote system monitor over WebSocket, TUI like top"
edition = "2021"
edition = "2024"
license = "MIT"
readme = "README.md"
[dependencies]
# socktop connector for agent communication
socktop_connector = "1.50.0"
tokio = { workspace = true }
tokio-tungstenite = { workspace = true }
tungstenite = { workspace = true }
futures = { workspace = true }
futures-util = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
url = { workspace = true }
ratatui = { workspace = true }
crossterm = { workspace = true }
chrono = { workspace = true }
anyhow = { workspace = true }
dirs-next = { workspace = true }
sysinfo = { workspace = true }
[dev-dependencies]
assert_cmd = "2.0"
tempfile = "3"

socktop/README.md (new file, 26 lines)

@@ -0,0 +1,26 @@
# socktop (client)
Minimal TUI client for the socktop remote monitoring agent.
Features:
- Connects to a socktop_agent over WebSocket / secure WebSocket
- Displays CPU, memory, swap, disks, network, processes, (optional) GPU metrics
- Self-signed TLS certificate pinning via `--tls-ca`
- Profile management with saved intervals
- Low CPU usage (request-driven updates)
Quick start:
```
cargo install socktop
socktop ws://HOST:3000/ws
```
With TLS (copy agent cert first):
```
socktop --tls-ca cert.pem wss://HOST:8443/ws
```
Demo mode (spawns a local agent automatically; also offered on the first-run prompt):
```
socktop --demo
```
Full documentation, screenshots, and advanced usage:
https://github.com/jasonwitty/socktop

@@ -0,0 +1,15 @@
syntax = "proto3";
package socktop;
// All running processes. Sorting is done client-side.
message Processes {
uint64 process_count = 1; // total processes in the system
repeated Process rows = 2; // all processes
}
message Process {
uint32 pid = 1;
string name = 2;
float cpu_usage = 3; // 0..100
uint64 mem_bytes = 4; // RSS bytes
}

File diff suppressed because it is too large.

socktop/src/history.rs

@@ -17,7 +17,10 @@ pub struct PerCoreHistory
impl PerCoreHistory {
pub fn new(cap: usize) -> Self {
Self { deques: Vec::new(), cap }
Self {
deques: Vec::new(),
cap,
}
}
// Ensure we have one deque per core; resize on CPU topology changes
@@ -36,4 +39,4 @@
push_capped(&mut self.deques[i], val, self.cap);
}
}
}
}

socktop/src/lib.rs (new file, 6 lines)

@@ -0,0 +1,6 @@
//! Library surface for integration tests and reuse.
pub mod types;
// Re-export connector functionality
pub use socktop_connector::{SocktopConnector, connect_to_socktop_agent};

socktop/src/main.rs

@@ -2,22 +2,432 @@
mod app;
mod history;
mod profiles;
mod retry; // pure retry timing logic
mod types;
mod ui;
mod ws;
use std::env;
use app::App;
use profiles::{ProfileEntry, ProfileRequest, ResolveProfile, load_profiles, save_profiles};
use std::env;
use std::io::{self, Write};
pub(crate) struct ParsedArgs {
url: Option<String>,
tls_ca: Option<String>,
profile: Option<String>,
save: bool,
demo: bool,
dry_run: bool, // hidden test helper: skip connecting
metrics_interval_ms: Option<u64>,
processes_interval_ms: Option<u64>,
verify_hostname: bool,
}
pub(crate) fn parse_args<I: IntoIterator<Item = String>>(args: I) -> Result<ParsedArgs, String> {
let mut it = args.into_iter();
let prog = it.next().unwrap_or_else(|| "socktop".into());
let mut url: Option<String> = None;
let mut tls_ca: Option<String> = None;
let mut profile: Option<String> = None;
let mut save = false;
let mut demo = false;
let mut dry_run = false;
let mut metrics_interval_ms: Option<u64> = None;
let mut processes_interval_ms: Option<u64> = None;
let mut verify_hostname = false;
while let Some(arg) = it.next() {
match arg.as_str() {
"-h" | "--help" => {
return Err(format!(
"Usage: {prog} [--tls-ca CERT_PEM|-t CERT_PEM] [--verify-hostname] [--profile NAME|-P NAME] [--save] [--demo] [--metrics-interval-ms N] [--processes-interval-ms N] [ws://HOST:PORT/ws]\n"
));
}
"--tls-ca" | "-t" => {
tls_ca = it.next();
}
"--verify-hostname" => {
// opt-in hostname (SAN) verification
// default behavior is to skip it for easier home network usage
// (still pins the provided certificate)
verify_hostname = true;
}
"--profile" | "-P" => {
profile = it.next();
}
"--save" => {
save = true;
}
"--demo" => {
demo = true;
}
"--dry-run" => {
// intentionally undocumented
dry_run = true;
}
"--metrics-interval-ms" => {
metrics_interval_ms = it.next().and_then(|v| v.parse().ok());
}
"--processes-interval-ms" => {
processes_interval_ms = it.next().and_then(|v| v.parse().ok());
}
_ if arg.starts_with("--tls-ca=") => {
if let Some((_, v)) = arg.split_once('=')
&& !v.is_empty()
{
tls_ca = Some(v.to_string());
}
}
_ if arg.starts_with("--profile=") => {
if let Some((_, v)) = arg.split_once('=')
&& !v.is_empty()
{
profile = Some(v.to_string());
}
}
_ if arg.starts_with("--metrics-interval-ms=") => {
if let Some((_, v)) = arg.split_once('=') {
metrics_interval_ms = v.parse().ok();
}
}
_ if arg.starts_with("--processes-interval-ms=") => {
if let Some((_, v)) = arg.split_once('=') {
processes_interval_ms = v.parse().ok();
}
}
_ => {
if url.is_none() {
url = Some(arg);
} else {
return Err(format!(
"Unexpected argument. Usage: {prog} [--tls-ca CERT_PEM|-t CERT_PEM] [--verify-hostname] [--profile NAME|-P NAME] [--save] [--demo] [ws://HOST:PORT/ws]"
));
}
}
}
}
Ok(ParsedArgs {
url,
tls_ca,
profile,
save,
demo,
dry_run,
metrics_interval_ms,
processes_interval_ms,
verify_hostname,
})
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let args: Vec<String> = env::args().collect();
if args.len() < 2 {
eprintln!("Usage: {} ws://HOST:PORT/ws", args[0]);
std::process::exit(1);
}
let url = args[1].clone();
let parsed = match parse_args(env::args()) {
Ok(v) => v,
Err(msg) => {
eprintln!("{msg}");
return Ok(());
}
};
// Support the --version / -V flag (print and exit)
if env::args().any(|a| a == "--version" || a == "-V") {
println!("socktop {}", env!("CARGO_PKG_VERSION"));
return Ok(());
}
if parsed.demo || matches!(parsed.profile.as_deref(), Some("demo")) {
return run_demo_mode(parsed.tls_ca.as_deref()).await;
}
let profiles_file = load_profiles();
let req = ProfileRequest {
profile_name: parsed.profile.clone(),
url: parsed.url.clone(),
tls_ca: parsed.tls_ca.clone(),
};
let resolved = req.resolve(&profiles_file);
let mut profiles_mut = profiles_file.clone();
let (url, tls_ca, metrics_interval_ms, processes_interval_ms): (
String,
Option<String>,
Option<u64>,
Option<u64>,
) = match resolved {
ResolveProfile::Direct(u, t) => {
if let Some(name) = parsed.profile.as_ref() {
let existing = profiles_mut.profiles.get(name);
match existing {
None => {
let (mi, pi) = gather_intervals(
parsed.metrics_interval_ms,
parsed.processes_interval_ms,
)?;
profiles_mut.profiles.insert(
name.clone(),
ProfileEntry {
url: u.clone(),
tls_ca: t.clone(),
metrics_interval_ms: mi,
processes_interval_ms: pi,
},
);
let _ = save_profiles(&profiles_mut);
(u, t, mi, pi)
}
Some(entry) => {
let changed = entry.url != u || entry.tls_ca != t;
if changed {
let overwrite = if parsed.save {
true
} else {
prompt_yes_no(&format!(
"Overwrite existing profile '{name}'? [y/N]: "
))
};
if overwrite {
let (mi, pi) = gather_intervals(
parsed.metrics_interval_ms,
parsed.processes_interval_ms,
)?;
profiles_mut.profiles.insert(
name.clone(),
ProfileEntry {
url: u.clone(),
tls_ca: t.clone(),
metrics_interval_ms: mi,
processes_interval_ms: pi,
},
);
let _ = save_profiles(&profiles_mut);
(u, t, mi, pi)
} else {
(u, t, entry.metrics_interval_ms, entry.processes_interval_ms)
}
} else {
(u, t, entry.metrics_interval_ms, entry.processes_interval_ms)
}
}
}
} else {
(
u,
t,
parsed.metrics_interval_ms,
parsed.processes_interval_ms,
)
}
}
ResolveProfile::Loaded(u, t) => {
let entry = profiles_mut
.profiles
.get(parsed.profile.as_ref().unwrap())
.unwrap();
(u, t, entry.metrics_interval_ms, entry.processes_interval_ms)
}
ResolveProfile::PromptSelect(mut names) => {
if !names.iter().any(|n: &String| n == "demo") {
names.push("demo".into());
}
eprintln!("Select profile:");
for (i, n) in names.iter().enumerate() {
eprintln!(" {}. {}", i + 1, n);
}
eprint!("Enter number (or blank to abort): ");
let _ = io::stderr().flush();
let mut line = String::new();
if io::stdin().read_line(&mut line).is_ok() {
if let Ok(idx) = line.trim().parse::<usize>() {
if (1..=names.len()).contains(&idx) {
let name = &names[idx - 1];
if name == "demo" {
return run_demo_mode(parsed.tls_ca.as_deref()).await;
}
if let Some(entry) = profiles_mut.profiles.get(name) {
(
entry.url.clone(),
entry.tls_ca.clone(),
entry.metrics_interval_ms,
entry.processes_interval_ms,
)
} else {
return Ok(());
}
} else {
return Ok(());
}
} else {
return Ok(());
}
} else {
return Ok(());
}
}
ResolveProfile::PromptCreate(name) => {
eprintln!("Profile '{name}' does not exist yet.");
let url = prompt_string("Enter URL (ws://HOST:PORT/ws or wss://...): ")?;
if url.trim().is_empty() {
return Ok(());
}
let ca = prompt_string("Enter TLS CA path (or leave blank): ")?;
let ca_opt = if ca.trim().is_empty() {
None
} else {
Some(ca.trim().to_string())
};
let (mi, pi) =
gather_intervals(parsed.metrics_interval_ms, parsed.processes_interval_ms)?;
profiles_mut.profiles.insert(
name.clone(),
ProfileEntry {
url: url.trim().to_string(),
tls_ca: ca_opt.clone(),
metrics_interval_ms: mi,
processes_interval_ms: pi,
},
);
let _ = save_profiles(&profiles_mut);
(url.trim().to_string(), ca_opt, mi, pi)
}
ResolveProfile::None => {
//eprintln!("No URL provided and no profiles to select.");
//first run, no args, no profiles: show welcome message and offer demo mode
if profiles_mut.profiles.is_empty() && parsed.url.is_none() {
eprintln!("Welcome to socktop!");
eprintln!("It looks like this is your first time running the application.");
eprintln!(
"You can connect to a socktop_agent instance to monitor system metrics and processes."
);
eprintln!("If you don't have an agent running, you can try the demo mode.");
if prompt_yes_no("Would you like to start the demo mode now? [Y/n]: ") {
return run_demo_mode(parsed.tls_ca.as_deref()).await;
} else {
eprintln!("Aborting. You can run 'socktop --help' for usage information.");
return Ok(());
}
}
return Err("No URL provided and no profiles to select.".into());
}
};
let is_tls = url.starts_with("wss://");
let has_token = url.contains("token=");
let mut app = App::new()
.with_intervals(metrics_interval_ms, processes_interval_ms)
.with_status(is_tls, has_token);
if parsed.dry_run {
return Ok(());
}
app.run(&url, tls_ca.as_deref(), parsed.verify_hostname)
.await
}
fn prompt_yes_no(prompt: &str) -> bool {
eprint!("{prompt}");
let _ = io::stderr().flush();
let mut line = String::new();
if io::stdin().read_line(&mut line).is_ok() {
matches!(line.trim().to_ascii_lowercase().as_str(), "y" | "yes")
} else {
false
}
}
fn prompt_string(prompt: &str) -> io::Result<String> {
eprint!("{prompt}");
let _ = io::stderr().flush();
let mut line = String::new();
io::stdin().read_line(&mut line)?;
Ok(line)
}
fn gather_intervals(
arg_metrics: Option<u64>,
arg_procs: Option<u64>,
) -> Result<(Option<u64>, Option<u64>), Box<dyn std::error::Error>> {
let default_metrics = 500u64;
let default_procs = 2000u64;
let metrics = match arg_metrics {
Some(v) => Some(v),
None => {
let inp = prompt_string(&format!(
"Metrics interval ms (default {default_metrics}, Enter for default): "
))?;
let t = inp.trim();
if t.is_empty() {
Some(default_metrics)
} else {
Some(t.parse()?)
}
}
};
let procs = match arg_procs {
Some(v) => Some(v),
None => {
let inp = prompt_string(&format!(
"Processes interval ms (default {default_procs}, Enter for default): "
))?;
let t = inp.trim();
if t.is_empty() {
Some(default_procs)
} else {
Some(t.parse()?)
}
}
};
Ok((metrics, procs))
}
// Demo mode implementation
async fn run_demo_mode(_tls_ca: Option<&str>) -> Result<(), Box<dyn std::error::Error>> {
let port = 3231;
let url = format!("ws://127.0.0.1:{port}/ws");
let child = spawn_demo_agent(port)?;
let mut app = App::new();
// Demo mode connects to localhost, so disable hostname verification
tokio::select! { res=app.run(&url,None,false)=>{ drop(child); res } _=tokio::signal::ctrl_c()=>{ drop(child); Ok(()) } }
}
struct DemoGuard {
port: u16,
child: std::sync::Arc<std::sync::Mutex<Option<std::process::Child>>>,
}
impl Drop for DemoGuard {
fn drop(&mut self) {
if let Some(mut ch) = self.child.lock().unwrap().take() {
let _ = ch.kill();
}
eprintln!("Stopped demo agent on port {}", self.port);
}
}
fn spawn_demo_agent(port: u16) -> Result<DemoGuard, Box<dyn std::error::Error>> {
let candidate = find_agent_executable();
let mut cmd = std::process::Command::new(candidate);
cmd.arg("--port").arg(port.to_string());
cmd.env("SOCKTOP_ENABLE_SSL", "0");
//JW: do not disable GPU and TEMP in demo mode
//cmd.env("SOCKTOP_AGENT_GPU", "0");
//cmd.env("SOCKTOP_AGENT_TEMP", "0");
let child = cmd.spawn()?;
std::thread::sleep(std::time::Duration::from_millis(300));
Ok(DemoGuard {
port,
child: std::sync::Arc::new(std::sync::Mutex::new(Some(child))),
})
}
fn find_agent_executable() -> std::path::PathBuf {
if let Ok(exe) = std::env::current_exe()
&& let Some(parent) = exe.parent()
{
#[cfg(windows)]
let name = "socktop_agent.exe";
#[cfg(not(windows))]
let name = "socktop_agent";
let candidate = parent.join(name);
if candidate.exists() {
return candidate;
}
}
std::path::PathBuf::from("socktop_agent")
}

socktop/src/profiles.rs (new file, 103 lines)

@@ -0,0 +1,103 @@
//! Connection profiles: load/save simple JSON mapping of profile name -> { url, tls_ca }
//! Stored under XDG config dir: $XDG_CONFIG_HOME/socktop/profiles.json (fallback ~/.config/socktop/profiles.json)
use serde::{Deserialize, Serialize};
use std::{collections::BTreeMap, fs, path::PathBuf};
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ProfileEntry {
pub url: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub tls_ca: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub metrics_interval_ms: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub processes_interval_ms: Option<u64>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ProfilesFile {
#[serde(default)]
pub profiles: BTreeMap<String, ProfileEntry>,
#[serde(default)]
pub version: u32,
}
pub fn config_dir() -> PathBuf {
if let Some(xdg) = std::env::var_os("XDG_CONFIG_HOME") {
PathBuf::from(xdg).join("socktop")
} else {
dirs_next::config_dir()
.unwrap_or_else(|| PathBuf::from("."))
.join("socktop")
}
}
pub fn profiles_path() -> PathBuf {
config_dir().join("profiles.json")
}
pub fn load_profiles() -> ProfilesFile {
let path = profiles_path();
match fs::read_to_string(&path) {
Ok(s) => serde_json::from_str(&s).unwrap_or_default(),
Err(_) => ProfilesFile::default(),
}
}
pub fn save_profiles(p: &ProfilesFile) -> std::io::Result<()> {
let path = profiles_path();
if let Some(parent) = path.parent() {
fs::create_dir_all(parent)?;
}
let data = serde_json::to_vec_pretty(p).expect("serialize profiles");
fs::write(path, data)
}
pub enum ResolveProfile {
/// Use the provided runtime inputs (not persisted). (url, tls_ca)
Direct(String, Option<String>),
/// Loaded from existing profile entry (url, tls_ca)
Loaded(String, Option<String>),
/// Should prompt user to select among profile names
PromptSelect(Vec<String>),
/// Should prompt user to create a new profile (name)
PromptCreate(String),
/// No profile could be resolved (e.g., missing arguments)
None,
}
pub struct ProfileRequest {
pub profile_name: Option<String>,
pub url: Option<String>,
pub tls_ca: Option<String>,
}
impl ProfileRequest {
pub fn resolve(self, pf: &ProfilesFile) -> ResolveProfile {
// Case: only profile name given -> try load
if self.url.is_none() && self.profile_name.is_some() {
let Some(name) = self.profile_name else {
unreachable!("Already checked profile_name.is_some()")
};
let Some(entry) = pf.profiles.get(&name) else {
return ResolveProfile::PromptCreate(name);
};
return ResolveProfile::Loaded(entry.url.clone(), entry.tls_ca.clone());
}
// Both provided -> direct (maybe later saved by caller)
if let Some(u) = self.url {
return ResolveProfile::Direct(u, self.tls_ca);
}
// Nothing provided -> maybe prompt select if profiles exist
if self.url.is_none() && self.profile_name.is_none() {
if pf.profiles.is_empty() {
ResolveProfile::None
} else {
ResolveProfile::PromptSelect(pf.profiles.keys().cloned().collect())
}
} else {
ResolveProfile::None
}
}
}
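For reference, a `profiles.json` produced by the types above might look like the following (the profile name, URL, path, and intervals are illustrative values only; `tls_ca` and the interval fields are omitted entirely when `None`):

```json
{
  "profiles": {
    "homelab": {
      "url": "wss://192.168.1.50:8443/ws",
      "tls_ca": "/home/user/certs/agent.pem",
      "metrics_interval_ms": 500,
      "processes_interval_ms": 2000
    }
  },
  "version": 0
}
```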

socktop/src/retry.rs (new file, 114 lines)

@@ -0,0 +1,114 @@
//! Pure retry timing logic (decoupled from App state / UI) for testability.
use std::time::{Duration, Instant};
/// Result of computing retry timing.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct RetryTiming {
pub should_retry_now: bool,
/// Seconds until next retry (Some(0) means ready now); None means inactive/no countdown.
pub seconds_until_retry: Option<u64>,
}
/// Compute retry timing given connection state inputs.
///
/// Inputs:
/// - `disconnected`: true when connection_state == Disconnected.
/// - `modal_active`: requires the connection error modal be visible to show countdown / trigger auto retry.
/// - `original_disconnect_time`: time we first noticed disconnect.
/// - `last_auto_retry`: time we last performed an automatic retry.
/// - `now`: current time (injected for determinism / tests).
/// - `interval`: retry interval duration.
pub(crate) fn compute_retry_timing(
disconnected: bool,
modal_active: bool,
original_disconnect_time: Option<Instant>,
last_auto_retry: Option<Instant>,
now: Instant,
interval: Duration,
) -> RetryTiming {
if !disconnected || !modal_active {
return RetryTiming {
should_retry_now: false,
seconds_until_retry: None,
};
}
let baseline = match last_auto_retry.or(original_disconnect_time) {
Some(b) => b,
None => {
return RetryTiming {
should_retry_now: false,
seconds_until_retry: None,
};
}
};
let elapsed = now.saturating_duration_since(baseline);
if elapsed >= interval {
RetryTiming {
should_retry_now: true,
seconds_until_retry: Some(0),
}
} else {
let remaining = interval - elapsed;
RetryTiming {
should_retry_now: false,
seconds_until_retry: Some(remaining.as_secs()),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn inactive_when_not_disconnected() {
let now = Instant::now();
let rt = compute_retry_timing(false, true, Some(now), None, now, Duration::from_secs(30));
assert!(!rt.should_retry_now);
assert_eq!(rt.seconds_until_retry, None);
}
#[test]
fn countdown_progress_and_ready() {
let base = Instant::now();
let rt1 = compute_retry_timing(
true,
true,
Some(base),
None,
base + Duration::from_secs(10),
Duration::from_secs(30),
);
assert!(!rt1.should_retry_now);
assert_eq!(rt1.seconds_until_retry, Some(20));
let rt2 = compute_retry_timing(
true,
true,
Some(base),
None,
base + Duration::from_secs(30),
Duration::from_secs(30),
);
assert!(rt2.should_retry_now);
assert_eq!(rt2.seconds_until_retry, Some(0));
}
#[test]
fn uses_last_auto_retry_as_baseline() {
let base: Instant = Instant::now();
let last = base + Duration::from_secs(30); // one prior retry
// 10s after last retry => 20s remaining
let rt = compute_retry_timing(
true,
true,
Some(base),
Some(last),
last + Duration::from_secs(10),
Duration::from_secs(30),
);
assert!(!rt.should_retry_now);
assert_eq!(rt.seconds_until_retry, Some(20));
}
}

socktop/src/types.rs

@@ -1,41 +1,4 @@
//! Types that mirror the agent's JSON schema.
use serde::Deserialize;
#[derive(Debug, Deserialize, Clone)]
pub struct Disk {
pub name: String,
pub total: u64,
pub available: u64,
}
#[derive(Debug, Deserialize, Clone)]
pub struct Network {
// cumulative totals; client diffs to compute rates
pub received: u64,
pub transmitted: u64,
}
#[derive(Debug, Deserialize, Clone)]
pub struct ProcessInfo {
pub pid: u32,
pub name: String,
pub cpu_usage: f32,
pub mem_bytes: u64,
}
#[derive(Debug, Deserialize, Clone)]
pub struct Metrics {
pub cpu_total: f32,
pub cpu_per_core: Vec<f32>,
pub mem_total: u64,
pub mem_used: u64,
pub swap_total: u64,
pub swap_used: u64,
pub process_count: usize,
pub hostname: String,
pub cpu_temp_c: Option<f32>,
pub disks: Vec<Disk>,
pub networks: Vec<Network>,
pub top_processes: Vec<ProcessInfo>,
}
// Re-export commonly used types from socktop_connector
pub use socktop_connector::Metrics;

File diff suppressed because it is too large.

@@ -1,66 +1,370 @@
@ -1,66 +1,370 @@
//! CPU average sparkline + per-core mini bars.
use crate::history::PerCoreHistory;
use crate::types::Metrics;
use crate::ui::theme::{SB_ARROW, SB_THUMB, SB_TRACK};
use crossterm::event::{KeyCode, KeyEvent, MouseButton, MouseEvent, MouseEventKind};
use ratatui::{
layout::{Constraint, Direction, Layout, Rect},
style::{Color, Modifier, Style},
text::{Line, Span},
widgets::{Block, Borders, Paragraph, Sparkline},
};
/// State for dragging the scrollbar thumb
#[derive(Clone, Copy, Debug, Default)]
pub struct PerCoreScrollDrag {
pub active: bool,
pub start_y: u16, // mouse row where drag started
pub start_top: usize, // thumb top (in track rows) at drag start
}
/// Returns the content area for per-core CPU bars, excluding borders and reserving space for scrollbar.
pub fn per_core_content_area(area: Rect) -> Rect {
// Inner minus borders
let inner = Rect {
x: area.x + 1,
y: area.y + 1,
width: area.width.saturating_sub(2),
height: area.height.saturating_sub(2),
};
// Reserve 1 column on the right for a gutter and 1 for the scrollbar
Rect {
x: inner.x,
y: inner.y,
width: inner.width.saturating_sub(2),
height: inner.height,
}
}
/// Handles key events for per-core CPU bars.
pub fn per_core_handle_key(scroll_offset: &mut usize, key: KeyEvent, page_size: usize) {
match key.code {
KeyCode::Left => *scroll_offset = scroll_offset.saturating_sub(1),
KeyCode::Right => *scroll_offset = scroll_offset.saturating_add(1),
KeyCode::PageUp => {
let step = page_size.max(1);
*scroll_offset = scroll_offset.saturating_sub(step);
}
KeyCode::PageDown => {
let step = page_size.max(1);
*scroll_offset = scroll_offset.saturating_add(step);
}
KeyCode::Home => *scroll_offset = 0,
KeyCode::End => *scroll_offset = usize::MAX, // draw() clamps to max
_ => {}
}
}
/// Handles mouse wheel over the content.
pub fn per_core_handle_mouse(
scroll_offset: &mut usize,
mouse: MouseEvent,
content_area: Rect,
page_size: usize,
) {
let inside = mouse.column >= content_area.x
&& mouse.column < content_area.x + content_area.width
&& mouse.row >= content_area.y
&& mouse.row < content_area.y + content_area.height;
if !inside {
return;
}
match mouse.kind {
MouseEventKind::ScrollUp => *scroll_offset = scroll_offset.saturating_sub(1),
MouseEventKind::ScrollDown => *scroll_offset = scroll_offset.saturating_add(1),
// Optional paging via horizontal wheel
MouseEventKind::ScrollLeft => {
let step = page_size.max(1);
*scroll_offset = scroll_offset.saturating_sub(step);
}
MouseEventKind::ScrollRight => {
let step = page_size.max(1);
*scroll_offset = scroll_offset.saturating_add(step);
}
_ => {}
}
}
/// Handles mouse interaction with the scrollbar itself (click arrows/page/drag).
pub fn per_core_handle_scrollbar_mouse(
scroll_offset: &mut usize,
drag: &mut Option<PerCoreScrollDrag>,
mouse: MouseEvent,
per_core_area: Rect,
total_rows: usize,
) {
// Geometry
let inner = Rect {
x: per_core_area.x + 1,
y: per_core_area.y + 1,
width: per_core_area.width.saturating_sub(2),
height: per_core_area.height.saturating_sub(2),
};
if inner.height < 3 || inner.width < 1 {
return;
}
let content = Rect {
x: inner.x,
y: inner.y,
width: inner.width.saturating_sub(2),
height: inner.height,
};
let scroll_area = Rect {
x: inner.x + inner.width.saturating_sub(1),
y: inner.y,
width: 1,
height: inner.height,
};
let viewport_rows = content.height as usize;
let total = total_rows.max(1);
let view = viewport_rows.clamp(1, total);
let max_off = total.saturating_sub(view);
let mut offset = (*scroll_offset).min(max_off);
// Track and current thumb
let track = (scroll_area.height - 2) as usize;
if track == 0 {
return;
}
let thumb_len = (track * view).div_ceil(total).max(1).min(track);
let top_for_offset = |off: usize| -> usize {
if max_off == 0 {
0
} else {
((track - thumb_len) * off + max_off / 2) / max_off
}
};
let thumb_top = top_for_offset(offset);
let inside_scrollbar = mouse.column == scroll_area.x
&& mouse.row >= scroll_area.y
&& mouse.row < scroll_area.y + scroll_area.height;
// Helper to page
let page_up = || offset.saturating_sub(view.max(1));
let page_down = || offset.saturating_add(view.max(1));
match mouse.kind {
MouseEventKind::Down(MouseButton::Left) if inside_scrollbar => {
// Where within the track?
let row = mouse.row;
if row == scroll_area.y {
// Top arrow
offset = offset.saturating_sub(1);
} else if row + 1 == scroll_area.y + scroll_area.height {
// Bottom arrow
offset = offset.saturating_add(1);
} else {
// Inside track
let rel = (row - (scroll_area.y + 1)) as usize;
let thumb_end = thumb_top + thumb_len;
if rel < thumb_top {
// Page up
offset = page_up();
} else if rel >= thumb_end {
// Page down
offset = page_down();
} else {
// Start dragging
*drag = Some(PerCoreScrollDrag {
active: true,
start_y: row,
start_top: thumb_top,
});
}
}
}
MouseEventKind::Drag(MouseButton::Left) => {
if let Some(mut d) = drag.take()
&& d.active
{
let dy = (mouse.row as i32) - (d.start_y as i32);
let new_top = (d.start_top as i32 + dy)
.clamp(0, (track.saturating_sub(thumb_len)) as i32)
as usize;
// Inverse mapping top -> offset
if track > thumb_len {
let denom = track - thumb_len;
offset = if max_off == 0 {
0
} else {
(new_top * max_off + denom / 2) / denom
};
} else {
offset = 0;
}
// Keep dragging
d.start_top = new_top;
d.start_y = mouse.row;
*drag = Some(d);
}
}
MouseEventKind::Up(MouseButton::Left) => {
// End drag
*drag = None;
}
// Also allow wheel scrolling when cursor is over the scrollbar
MouseEventKind::ScrollUp if inside_scrollbar => {
offset = offset.saturating_sub(1);
}
MouseEventKind::ScrollDown if inside_scrollbar => {
offset = offset.saturating_add(1);
}
_ => {}
}
// Clamp and write back
if offset > max_off {
offset = max_off;
}
*scroll_offset = offset;
}
/// Clamp scroll offset to the valid range given content and viewport.
pub fn per_core_clamp(scroll_offset: &mut usize, total_rows: usize, viewport_rows: usize) {
let max_offset = total_rows.saturating_sub(viewport_rows);
if *scroll_offset > max_offset {
*scroll_offset = max_offset;
}
}
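per_core_clamp is pure arithmetic, so its edge cases are easy to verify in isolation; a minimal standalone sketch (mirroring the function above rather than importing it):

```rust
// Standalone mirror of per_core_clamp, for illustration only.
fn per_core_clamp(scroll_offset: &mut usize, total_rows: usize, viewport_rows: usize) {
    let max_offset = total_rows.saturating_sub(viewport_rows);
    if *scroll_offset > max_offset {
        *scroll_offset = max_offset;
    }
}

fn main() {
    let mut off = 10;
    per_core_clamp(&mut off, 8, 4); // 8 rows, 4 visible -> max offset is 4
    assert_eq!(off, 4);

    let mut off = 0;
    per_core_clamp(&mut off, 2, 4); // viewport taller than content -> offset stays 0
    assert_eq!(off, 0);
    println!("ok");
}
```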
/// Draws the CPU average sparkline graph.
pub fn draw_cpu_avg_graph(
f: &mut ratatui::Frame<'_>,
area: Rect,
hist: &std::collections::VecDeque<u64>,
m: Option<&Metrics>,
) {
// Calculate average CPU over the monitoring period
let avg_cpu = if !hist.is_empty() {
let sum: u64 = hist.iter().sum();
sum as f64 / hist.len() as f64
} else {
0.0
};
let title = if let Some(mm) = m {
format!("CPU (now: {:>5.1}% | avg: {:>5.1}%)", mm.cpu_total, avg_cpu)
} else {
"CPU avg".into()
};
// Build the top-right info (CPU temp and polling intervals)
let top_right_info = if let Some(mm) = m {
mm.cpu_temp_c
.map(|t| {
let icon = if t < 50.0 {
"😎"
} else if t < 85.0 {
"⚠️"
} else {
"🔥"
};
format!("CPU Temp: {t:.1}°C {icon}")
})
.unwrap_or_else(|| "CPU Temp: N/A".into())
} else {
String::new()
};
let max_points = area.width.saturating_sub(2) as usize;
let start = hist.len().saturating_sub(max_points);
let data: Vec<u64> = hist.iter().skip(start).cloned().collect();
// Render the sparkline with title on left
let spark = Sparkline::default()
.block(Block::default().borders(Borders::ALL).title(title))
.data(&data)
.max(100)
.style(Style::default().fg(Color::Cyan));
f.render_widget(spark, area);
// Render the top-right info as text overlay in the top-right corner
if !top_right_info.is_empty() {
let info_area = Rect {
x: area.x + area.width.saturating_sub(top_right_info.len() as u16 + 2),
y: area.y,
width: top_right_info.len() as u16 + 1,
height: 1,
};
let info_line = Line::from(Span::raw(top_right_info));
f.render_widget(Paragraph::new(info_line), info_area);
}
}
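The avg value in the title above is a plain mean over the sparkline history; a standalone sketch of that computation (the helper name avg is local to this example):

```rust
use std::collections::VecDeque;

// Mean of the sampled CPU history; 0.0 when no samples exist yet.
fn avg(hist: &VecDeque<u64>) -> f64 {
    if hist.is_empty() {
        0.0
    } else {
        hist.iter().sum::<u64>() as f64 / hist.len() as f64
    }
}

fn main() {
    let hist: VecDeque<u64> = [40, 50, 60].into_iter().collect();
    assert_eq!(avg(&hist), 50.0);
    assert_eq!(avg(&VecDeque::new()), 0.0);
    println!("ok");
}
```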
/// Draws the per-core CPU bars with sparklines and trends.
pub fn draw_per_core_bars(
f: &mut ratatui::Frame<'_>,
area: Rect,
m: Option<&Metrics>,
per_core_hist: &PerCoreHistory,
scroll_offset: usize,
) {
f.render_widget(
Block::default().borders(Borders::ALL).title("Per-core"),
area,
);
let Some(mm) = m else {
return;
};
// Compute inner rect and content area
let inner = Rect {
x: area.x + 1,
y: area.y + 1,
width: area.width.saturating_sub(2),
height: area.height.saturating_sub(2),
};
if inner.height == 0 || inner.width <= 2 {
return;
}
let content = Rect {
x: inner.x,
y: inner.y,
width: inner.width.saturating_sub(2),
height: inner.height,
};
let total_rows = mm.cpu_per_core.len();
let viewport_rows = content.height as usize;
let max_offset = total_rows.saturating_sub(viewport_rows);
let offset = scroll_offset.min(max_offset);
let show_n = total_rows.saturating_sub(offset).min(viewport_rows);
let constraints: Vec<Constraint> = (0..show_n).map(|_| Constraint::Length(1)).collect();
let vchunks = Layout::default()
.direction(Direction::Vertical)
.constraints(constraints)
.split(content);
for i in 0..show_n {
let idx = offset + i;
let rect = vchunks[i];
let hchunks = Layout::default()
.direction(Direction::Horizontal)
.constraints([Constraint::Min(6), Constraint::Length(12)])
.split(rect);
let curr = mm.cpu_per_core[idx].clamp(0.0, 100.0);
let older = per_core_hist
.deques
.get(idx)
.and_then(|d| d.iter().rev().nth(20).copied())
.map(|v| v as f32)
.unwrap_or(curr);
let trend = if curr > older + 0.2 {
"↑"
} else if curr + 0.2 < older {
"↓"
} else {
" "
};
let fg = match curr {
x if x < 25.0 => Color::Green,
@@ -70,7 +374,7 @@ pub fn draw_per_core_bars(
let hist: Vec<u64> = per_core_hist
.deques
.get(idx)
.map(|d| {
let max_points = hchunks[0].width as usize;
let start = d.len().saturating_sub(max_points);
@@ -82,10 +386,49 @@ pub fn draw_per_core_bars(
.data(&hist)
.max(100)
.style(Style::default().fg(fg));
f.render_widget(spark, hchunks[0]);
let label = format!("cpu{idx:<2}{trend}{curr:>5.1}%");
let line = Line::from(Span::styled(
label,
Style::default().fg(fg).add_modifier(Modifier::BOLD),
));
f.render_widget(Paragraph::new(line).right_aligned(), hchunks[1]);
}
}
// Custom 1-col scrollbar with arrows, track, and exact mapping
let scroll_area = Rect {
x: inner.x + inner.width.saturating_sub(1),
y: inner.y,
width: 1,
height: inner.height,
};
if scroll_area.height >= 3 {
let track = (scroll_area.height - 2) as usize;
let total = total_rows.max(1);
let view = viewport_rows.clamp(1, total);
let max_off = total.saturating_sub(view);
let thumb_len = (track * view).div_ceil(total).max(1).min(track);
let thumb_top = if max_off == 0 {
0
} else {
((track - thumb_len) * offset + max_off / 2) / max_off
};
// Build lines: top arrow, track (with thumb), bottom arrow
let mut lines: Vec<Line> = Vec::with_capacity(scroll_area.height as usize);
lines.push(Line::from(Span::styled("▲", Style::default().fg(SB_ARROW))));
for i in 0..track {
if i >= thumb_top && i < thumb_top + thumb_len {
lines.push(Line::from(Span::styled("█", Style::default().fg(SB_THUMB))));
} else {
lines.push(Line::from(Span::styled("│", Style::default().fg(SB_TRACK))));
}
}
lines.push(Line::from(Span::styled("▼", Style::default().fg(SB_ARROW))));
f.render_widget(Paragraph::new(lines), scroll_area);
}
}
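The custom scrollbar above maps a content offset to a thumb row and back using rounded integer division; a standalone sketch of both directions (function names are local to this example):

```rust
// Forward map: content offset -> thumb top row within the track.
// `track` is the scrollbar height minus the two arrow cells.
fn thumb_top(track: usize, thumb_len: usize, offset: usize, max_off: usize) -> usize {
    if max_off == 0 {
        0
    } else {
        ((track - thumb_len) * offset + max_off / 2) / max_off
    }
}

// Inverse map: dragged thumb top row -> content offset.
// Caller guarantees track > thumb_len (otherwise nothing can scroll).
fn offset_from_top(track: usize, thumb_len: usize, new_top: usize, max_off: usize) -> usize {
    let denom = track - thumb_len;
    if max_off == 0 {
        0
    } else {
        (new_top * max_off + denom / 2) / denom
    }
}

fn main() {
    // With as many free track cells as offsets, the mapping round-trips.
    let (track, thumb_len, max_off) = (10, 4, 6);
    for off in 0..=max_off {
        let top = thumb_top(track, thumb_len, off, max_off);
        assert_eq!(offset_from_top(track, thumb_len, top, max_off), off);
    }
    println!("ok");
}
```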


@@ -1,16 +1,18 @@
//! Disk cards with per-device gauge and title line.
use crate::types::Metrics;
use crate::ui::util::{disk_icon, human, truncate_middle};
use ratatui::{
layout::{Constraint, Direction, Layout, Rect},
style::Style,
widgets::{Block, Borders, Gauge},
};
pub fn draw_disks(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
f.render_widget(Block::default().borders(Borders::ALL).title("Disks"), area);
let Some(mm) = m else {
return;
};
let inner = Rect {
x: area.x + 1,
@@ -18,44 +20,88 @@ pub fn draw_disks(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
width: area.width.saturating_sub(2),
height: area.height.saturating_sub(2),
};
if inner.height < 3 {
return;
}
// Filter duplicates by keeping first occurrence of each unique name
let mut seen_names = std::collections::HashSet::new();
let unique_disks: Vec<_> = mm
.disks
.iter()
.filter(|d| seen_names.insert(d.name.clone()))
.collect();
let per_disk_h = 3u16;
let max_cards = (inner.height / per_disk_h).min(unique_disks.len() as u16) as usize;
let constraints: Vec<Constraint> = (0..max_cards)
.map(|_| Constraint::Length(per_disk_h))
.collect();
let rows = Layout::default()
.direction(Direction::Vertical)
.constraints(constraints)
.split(inner);
for (i, slot) in rows.iter().enumerate() {
let d = unique_disks[i];
let used = d.total.saturating_sub(d.available);
let ratio = if d.total > 0 {
used as f64 / d.total as f64
} else {
0.0
};
let pct = (ratio * 100.0).round() as u16;
let color = if pct < 70 {
ratatui::style::Color::Green
} else if pct < 90 {
ratatui::style::Color::Yellow
} else {
ratatui::style::Color::Red
};
// Add indentation for partitions
let indent = if d.is_partition { "└─" } else { "" };
// Add temperature if available
let temp_str = d
.temperature
.map(|t| format!(" {}°C", t.round() as i32))
.unwrap_or_default();
let title = format!(
"{}{}{}{} {} / {} ({}%)",
indent,
disk_icon(&d.name),
truncate_middle(&d.name, (slot.width.saturating_sub(6)) as usize / 2),
temp_str,
human(used),
human(d.total),
pct
);
// Indent the entire card (block) for partitions to align with └─ prefix (4 chars)
let card_indent = if d.is_partition { 4 } else { 0 };
let card_rect = Rect {
x: slot.x + card_indent,
y: slot.y,
width: slot.width.saturating_sub(card_indent),
height: slot.height,
};
let card = Block::default().borders(Borders::ALL).title(title);
f.render_widget(card, card_rect);
let inner_card = Rect {
x: card_rect.x + 1,
y: card_rect.y + 1,
width: card_rect.width.saturating_sub(2),
height: card_rect.height.saturating_sub(2),
};
if inner_card.height == 0 {
continue;
}
let gauge_rect = Rect {
x: inner_card.x,
@@ -70,4 +116,4 @@ pub fn draw_disks(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
f.render_widget(g, gauge_rect);
}
}
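The duplicate filter in draw_disks leans on HashSet::insert returning false for names already seen, which keeps only the first occurrence of each name in order; a minimal standalone illustration (dedup_first is a name local to this example):

```rust
use std::collections::HashSet;

// Keep the first occurrence of each name, preserving input order.
fn dedup_first<'a>(names: &[&'a str]) -> Vec<&'a str> {
    let mut seen = HashSet::new();
    // insert() returns true only the first time a value is added.
    names.iter().copied().filter(|n| seen.insert(*n)).collect()
}

fn main() {
    let unique = dedup_first(&["sda", "sda1", "sda", "nvme0n1", "sda1"]);
    assert_eq!(unique, ["sda", "sda1", "nvme0n1"]);
    println!("ok");
}
```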

socktop/src/ui/gpu.rs (new file, 123 lines)

@@ -0,0 +1,123 @@
use ratatui::{
layout::{Constraint, Direction, Layout, Rect},
style::{Color, Style},
text::Span,
widgets::{Block, Borders, Gauge, Paragraph},
};
use crate::types::Metrics;
fn fmt_bytes(b: u64) -> String {
const KB: f64 = 1024.0;
const MB: f64 = KB * 1024.0;
const GB: f64 = MB * 1024.0;
let fb = b as f64;
if fb >= GB {
format!("{:.1}G", fb / GB)
} else if fb >= MB {
format!("{:.1}M", fb / MB)
} else if fb >= KB {
format!("{:.1}K", fb / KB)
} else {
format!("{b}B")
}
}
pub fn draw_gpu(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
let mut area = area;
let block = Block::default().borders(Borders::ALL).title("GPU");
f.render_widget(block, area);
// Guard: need some space inside the block
if area.height <= 2 || area.width <= 2 {
return;
}
// Inner padding consistent with the rest of the app
area.y += 1;
area.height = area.height.saturating_sub(2);
area.x += 1;
area.width = area.width.saturating_sub(2);
let Some(metrics) = m else {
return;
};
let Some(gpus) = metrics.gpus.as_ref() else {
f.render_widget(Paragraph::new("No GPUs"), area);
return;
};
if gpus.is_empty() {
f.render_widget(Paragraph::new("No GPUs"), area);
return;
}
// Show 3 rows per GPU: name, util bar, vram bar.
if area.height < 3 {
return;
}
let per_gpu_rows: u16 = 3;
let max_gpus = (area.height / per_gpu_rows) as usize;
let count = gpus.len().min(max_gpus);
let constraints = vec![Constraint::Length(1); count * per_gpu_rows as usize];
let rows = Layout::default()
.direction(Direction::Vertical)
.constraints(constraints)
.split(area);
// Per bar horizontal layout: [gauge] [value]
let split_bar = |r: Rect| {
Layout::default()
.direction(Direction::Horizontal)
.constraints([
Constraint::Min(8), // gauge column
Constraint::Length(24), // value column
])
.split(r)
};
for i in 0..count {
let g = &gpus[i];
// Row 1: GPU name
let name_text = g.name.as_deref().unwrap_or("GPU");
let name_p = Paragraph::new(Span::raw(name_text)).style(Style::default().fg(Color::Gray));
f.render_widget(name_p, rows[i * 3]);
// Row 2: Utilization bar + right label
let util_cols = split_bar(rows[i * 3 + 1]);
let util = g.utilization.unwrap_or(0.0).clamp(0.0, 100.0) as u16;
let util_gauge = Gauge::default()
.gauge_style(Style::default().fg(Color::Green))
.label(Span::raw(""))
.ratio(util as f64 / 100.0);
f.render_widget(util_gauge, util_cols[0]);
f.render_widget(
Paragraph::new(Span::raw(format!("util: {util}%")))
.style(Style::default().fg(Color::Gray)),
util_cols[1],
);
// Row 3: VRAM bar + right label
let mem_cols = split_bar(rows[i * 3 + 2]);
let used = g.mem_used.unwrap_or(0);
let total = g.mem_total.unwrap_or(1);
let mem_ratio = used as f64 / total as f64;
let mem_pct = (mem_ratio * 100.0).round() as u16;
let mem_gauge = Gauge::default()
.gauge_style(Style::default().fg(Color::LightMagenta))
.label(Span::raw(""))
.ratio(mem_ratio);
f.render_widget(mem_gauge, mem_cols[0]);
let used_s = fmt_bytes(used);
let total_s = fmt_bytes(total);
f.render_widget(
Paragraph::new(Span::raw(format!("vram: {used_s}/{total_s} ({mem_pct}%)")))
.style(Style::default().fg(Color::Gray)),
mem_cols[1],
);
}
}
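fmt_bytes above selects the largest unit whose threshold is met; reproduced standalone so the 1024-based boundaries can be exercised directly:

```rust
// Copy of the fmt_bytes helper from gpu.rs, for illustration.
fn fmt_bytes(b: u64) -> String {
    const KB: f64 = 1024.0;
    const MB: f64 = KB * 1024.0;
    const GB: f64 = MB * 1024.0;
    let fb = b as f64;
    if fb >= GB {
        format!("{:.1}G", fb / GB)
    } else if fb >= MB {
        format!("{:.1}M", fb / MB)
    } else if fb >= KB {
        format!("{:.1}K", fb / KB)
    } else {
        format!("{b}B")
    }
}

fn main() {
    assert_eq!(fmt_bytes(512), "512B");
    assert_eq!(fmt_bytes(2048), "2.0K");
    assert_eq!(fmt_bytes(3 * 1024 * 1024), "3.0M");
    assert_eq!(fmt_bytes(8 * 1024 * 1024 * 1024), "8.0G");
    println!("ok");
}
```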


@@ -1,20 +1,55 @@
//! Top header with hostname and CPU temperature indicator.
use crate::types::Metrics;
use ratatui::{
layout::Rect,
widgets::{Block, Borders},
text::{Line, Span},
widgets::{Block, Borders, Paragraph},
};
use std::time::Duration;
pub fn draw_header(
f: &mut ratatui::Frame<'_>,
area: Rect,
m: Option<&Metrics>,
is_tls: bool,
has_token: bool,
metrics_interval: Duration,
procs_interval: Duration,
) {
let base = if let Some(mm) = m {
format!("socktop — host: {}", mm.hostname)
} else {
"socktop — connecting...".into()
};
// TLS indicator: lock vs lock with cross (using ✗). Keep explicit label for clarity.
let tls_txt = if is_tls { "🔒 TLS" } else { "🔒✗ TLS" };
// Token indicator
let tok_txt = if has_token { "🔑 token" } else { "" };
let mut parts = vec![base, tls_txt.into()];
if !tok_txt.is_empty() {
parts.push(tok_txt.into());
}
parts.push("(a: about, h: help, q: quit)".into());
let title = parts.join(" | ");
// Render the block with left-aligned title
f.render_widget(Block::default().title(title).borders(Borders::BOTTOM), area);
// Render polling intervals on the right side
let mi = metrics_interval.as_millis();
let pi = procs_interval.as_millis();
let intervals = format!("{mi}ms metrics | {pi}ms procs");
let intervals_width = intervals.len() as u16;
if area.width > intervals_width + 2 {
let right_area = Rect {
x: area.x + area.width.saturating_sub(intervals_width + 1),
y: area.y,
width: intervals_width,
height: 1,
};
let intervals_line = Line::from(Span::raw(intervals));
f.render_widget(Paragraph::new(intervals_line), right_area);
}
}


@@ -1,18 +1,24 @@
//! Memory gauge.
use crate::types::Metrics;
use crate::ui::util::human;
use ratatui::{
layout::Rect,
style::{Color, Style},
widgets::{Block, Borders, Gauge},
};
pub fn draw_mem(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
let (used, total, pct) = if let Some(mm) = m {
let pct = if mm.mem_total > 0 {
(mm.mem_used as f64 / mm.mem_total as f64 * 100.0) as u16
} else {
0
};
(mm.mem_used, mm.mem_total, pct)
} else {
(0, 0, 0)
};
let g = Gauge::default()
.block(Block::default().borders(Borders::ALL).title("Memory"))
@@ -20,4 +26,4 @@ pub fn draw_mem(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
.percent(pct)
.label(format!("{} / {}", human(used), human(total)));
f.render_widget(g, area);
}


@@ -1,10 +1,17 @@
//! UI module root: exposes drawing functions for individual panels.
pub mod cpu;
pub mod disks;
pub mod gpu;
pub mod header;
pub mod mem;
pub mod modal;
pub mod modal_connection;
pub mod modal_format;
pub mod modal_process;
pub mod modal_types;
pub mod net;
pub mod processes;
pub mod swap;
pub mod theme;
pub mod util;

socktop/src/ui/modal.rs (new file, 634 lines)

@@ -0,0 +1,634 @@
//! Modal window system for socktop TUI application
use super::theme::MODAL_DIM_BG;
use crossterm::event::KeyCode;
use ratatui::{
Frame,
layout::{Alignment, Constraint, Direction, Layout, Rect},
style::{Color, Modifier, Style},
text::Line,
widgets::{Block, Borders, Clear, Paragraph, Wrap},
};
// Re-export types from modal_types
pub use super::modal_types::{
ModalAction, ModalButton, ModalType, ProcessHistoryData, ProcessModalData,
};
#[derive(Debug)]
pub struct ModalManager {
stack: Vec<ModalType>,
pub(super) active_button: ModalButton,
pub thread_scroll_offset: usize,
pub journal_scroll_offset: usize,
pub thread_scroll_max: usize,
pub journal_scroll_max: usize,
pub help_scroll_offset: usize,
}
impl ModalManager {
pub fn new() -> Self {
Self {
stack: Vec::new(),
active_button: ModalButton::Retry,
thread_scroll_offset: 0,
journal_scroll_offset: 0,
thread_scroll_max: 0,
journal_scroll_max: 0,
help_scroll_offset: 0,
}
}
pub fn is_active(&self) -> bool {
!self.stack.is_empty()
}
pub fn current_modal(&self) -> Option<&ModalType> {
self.stack.last()
}
pub fn push_modal(&mut self, modal: ModalType) {
self.stack.push(modal);
self.active_button = match self.stack.last() {
Some(ModalType::ConnectionError { .. }) => ModalButton::Retry,
Some(ModalType::ProcessDetails { .. }) => {
// Reset scroll state for new process details
self.thread_scroll_offset = 0;
self.journal_scroll_offset = 0;
self.thread_scroll_max = 0;
self.journal_scroll_max = 0;
ModalButton::Ok
}
Some(ModalType::About) => ModalButton::Ok,
Some(ModalType::Help) => {
// Reset scroll state for help modal
self.help_scroll_offset = 0;
ModalButton::Ok
}
Some(ModalType::Confirmation { .. }) => ModalButton::Confirm,
Some(ModalType::Info { .. }) => ModalButton::Ok,
None => ModalButton::Ok,
};
}
pub fn pop_modal(&mut self) -> Option<ModalType> {
let m = self.stack.pop();
if let Some(next) = self.stack.last() {
self.active_button = match next {
ModalType::ConnectionError { .. } => ModalButton::Retry,
ModalType::ProcessDetails { .. } => ModalButton::Ok,
ModalType::About => ModalButton::Ok,
ModalType::Help => ModalButton::Ok,
ModalType::Confirmation { .. } => ModalButton::Confirm,
ModalType::Info { .. } => ModalButton::Ok,
};
}
m
}
pub fn update_connection_error_countdown(&mut self, new_countdown: Option<u64>) {
if let Some(ModalType::ConnectionError {
auto_retry_countdown,
..
}) = self.stack.last_mut()
{
*auto_retry_countdown = new_countdown;
}
}
pub fn handle_key(&mut self, key: KeyCode) -> ModalAction {
if !self.is_active() {
return ModalAction::None;
}
match key {
KeyCode::Esc => {
self.pop_modal();
ModalAction::Cancel
}
KeyCode::Enter => self.handle_enter(),
KeyCode::Tab | KeyCode::Right => {
self.next_button();
ModalAction::None
}
KeyCode::BackTab | KeyCode::Left => {
self.prev_button();
ModalAction::None
}
KeyCode::Char('r') | KeyCode::Char('R') => {
if matches!(self.stack.last(), Some(ModalType::ConnectionError { .. })) {
ModalAction::RetryConnection
} else {
ModalAction::None
}
}
KeyCode::Char('q') | KeyCode::Char('Q') => {
if matches!(self.stack.last(), Some(ModalType::ConnectionError { .. })) {
ModalAction::ExitApp
} else {
ModalAction::None
}
}
KeyCode::Char('x') | KeyCode::Char('X') => {
if matches!(self.stack.last(), Some(ModalType::ProcessDetails { .. })) {
// Close all ProcessDetails modals at once (handles parent navigation chain)
while matches!(self.stack.last(), Some(ModalType::ProcessDetails { .. })) {
self.pop_modal();
}
ModalAction::Dismiss
} else {
ModalAction::None
}
}
KeyCode::Char('j') | KeyCode::Char('J') => {
if matches!(self.stack.last(), Some(ModalType::ProcessDetails { .. })) {
self.thread_scroll_offset = self
.thread_scroll_offset
.saturating_add(1)
.min(self.thread_scroll_max);
ModalAction::Handled
} else {
ModalAction::None
}
}
KeyCode::Char('k') | KeyCode::Char('K') => {
if matches!(self.stack.last(), Some(ModalType::ProcessDetails { .. })) {
self.thread_scroll_offset = self.thread_scroll_offset.saturating_sub(1);
ModalAction::Handled
} else {
ModalAction::None
}
}
KeyCode::Char('d') | KeyCode::Char('D') => {
if matches!(self.stack.last(), Some(ModalType::ProcessDetails { .. })) {
self.thread_scroll_offset = self
.thread_scroll_offset
.saturating_add(10)
.min(self.thread_scroll_max);
ModalAction::Handled
} else {
ModalAction::None
}
}
KeyCode::Char('u') | KeyCode::Char('U') => {
if matches!(self.stack.last(), Some(ModalType::ProcessDetails { .. })) {
self.thread_scroll_offset = self.thread_scroll_offset.saturating_sub(10);
ModalAction::Handled
} else {
ModalAction::None
}
}
KeyCode::Char('[') => {
if matches!(self.stack.last(), Some(ModalType::ProcessDetails { .. })) {
self.journal_scroll_offset = self.journal_scroll_offset.saturating_sub(1);
ModalAction::Handled
} else {
ModalAction::None
}
}
KeyCode::Char(']') => {
if matches!(self.stack.last(), Some(ModalType::ProcessDetails { .. })) {
self.journal_scroll_offset = self
.journal_scroll_offset
.saturating_add(1)
.min(self.journal_scroll_max);
ModalAction::Handled
} else {
ModalAction::None
}
}
KeyCode::Char('p') | KeyCode::Char('P') => {
// Switch to parent process if it exists
if let Some(ModalType::ProcessDetails { pid }) = self.stack.last() {
// We need to get the parent PID from the process details
// For now, return a special action that the app can handle
// The app has access to the process details and can extract parent_pid
ModalAction::SwitchToParentProcess(*pid)
} else {
ModalAction::None
}
}
KeyCode::Up => {
if matches!(self.stack.last(), Some(ModalType::Help)) {
self.help_scroll_offset = self.help_scroll_offset.saturating_sub(1);
ModalAction::Handled
} else {
ModalAction::None
}
}
KeyCode::Down => {
if matches!(self.stack.last(), Some(ModalType::Help)) {
self.help_scroll_offset = self.help_scroll_offset.saturating_add(1);
ModalAction::Handled
} else {
ModalAction::None
}
}
_ => ModalAction::None,
}
}
fn handle_enter(&mut self) -> ModalAction {
match (&self.stack.last(), &self.active_button) {
(Some(ModalType::ConnectionError { .. }), ModalButton::Retry) => {
ModalAction::RetryConnection
}
(Some(ModalType::ConnectionError { .. }), ModalButton::Exit) => ModalAction::ExitApp,
(Some(ModalType::ProcessDetails { .. }), ModalButton::Ok) => {
self.pop_modal();
ModalAction::Dismiss
}
(Some(ModalType::About), ModalButton::Ok) => {
self.pop_modal();
ModalAction::Dismiss
}
(Some(ModalType::Help), ModalButton::Ok) => {
self.pop_modal();
ModalAction::Dismiss
}
(Some(ModalType::Confirmation { .. }), ModalButton::Confirm) => ModalAction::Confirm,
(Some(ModalType::Confirmation { .. }), ModalButton::Cancel) => ModalAction::Cancel,
(Some(ModalType::Info { .. }), ModalButton::Ok) => {
self.pop_modal();
ModalAction::Dismiss
}
_ => ModalAction::None,
}
}
fn next_button(&mut self) {
self.active_button = match (&self.stack.last(), &self.active_button) {
(Some(ModalType::ConnectionError { .. }), ModalButton::Retry) => ModalButton::Exit,
(Some(ModalType::ConnectionError { .. }), ModalButton::Exit) => ModalButton::Retry,
(Some(ModalType::Confirmation { .. }), ModalButton::Confirm) => ModalButton::Cancel,
(Some(ModalType::Confirmation { .. }), ModalButton::Cancel) => ModalButton::Confirm,
_ => self.active_button.clone(),
};
}
fn prev_button(&mut self) {
self.next_button();
}
pub fn render(&mut self, f: &mut Frame, data: ProcessModalData) {
if let Some(m) = self.stack.last().cloned() {
self.render_background_dim(f);
self.render_modal_content(f, &m, data);
}
}
fn render_background_dim(&self, f: &mut Frame) {
let area = f.area();
f.render_widget(Clear, area);
f.render_widget(
Block::default()
.style(Style::default().bg(MODAL_DIM_BG).fg(MODAL_DIM_BG))
.borders(Borders::NONE),
area,
);
}
fn render_modal_content(&mut self, f: &mut Frame, modal: &ModalType, data: ProcessModalData) {
let area = f.area();
// Different sizes for different modal types
let modal_area = match modal {
ModalType::ProcessDetails { .. } => {
// Process details modal uses almost full screen (95% width, 90% height)
self.centered_rect(95, 90, area)
}
ModalType::About => {
// About modal uses medium size
self.centered_rect(90, 90, area)
}
ModalType::Help => {
// Help modal uses medium size
self.centered_rect(70, 80, area)
}
_ => {
// Other modals use smaller size
self.centered_rect(70, 50, area)
}
};
f.render_widget(Clear, modal_area);
match modal {
ModalType::ConnectionError {
message,
disconnected_at,
retry_count,
auto_retry_countdown,
} => self.render_connection_error(
f,
modal_area,
message,
*disconnected_at,
*retry_count,
*auto_retry_countdown,
),
ModalType::ProcessDetails { pid } => {
self.render_process_details(f, modal_area, *pid, data)
}
ModalType::About => self.render_about(f, modal_area),
ModalType::Help => self.render_help(f, modal_area),
ModalType::Confirmation {
title,
message,
confirm_text,
cancel_text,
} => self.render_confirmation(f, modal_area, title, message, confirm_text, cancel_text),
ModalType::Info { title, message } => self.render_info(f, modal_area, title, message),
}
}
fn render_confirmation(
&self,
f: &mut Frame,
area: Rect,
title: &str,
message: &str,
confirm_text: &str,
cancel_text: &str,
) {
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Min(1), Constraint::Length(3)])
.split(area);
let block = Block::default()
.title(format!(" {title} "))
.borders(Borders::ALL)
.style(Style::default().bg(Color::Black));
f.render_widget(block, area);
f.render_widget(
Paragraph::new(message)
.style(Style::default().fg(Color::White))
.alignment(Alignment::Center)
.wrap(Wrap { trim: true }),
chunks[0],
);
let buttons = Layout::default()
.direction(Direction::Horizontal)
.constraints([Constraint::Percentage(50), Constraint::Percentage(50)])
.split(chunks[1]);
let confirm_style = if self.active_button == ModalButton::Confirm {
Style::default()
.bg(Color::Green)
.fg(Color::Black)
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Color::Green)
};
let cancel_style = if self.active_button == ModalButton::Cancel {
Style::default()
.bg(Color::Red)
.fg(Color::Black)
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Color::Red)
};
f.render_widget(
Paragraph::new(confirm_text)
.style(confirm_style)
.alignment(Alignment::Center),
buttons[0],
);
f.render_widget(
Paragraph::new(cancel_text)
.style(cancel_style)
.alignment(Alignment::Center),
buttons[1],
);
}
fn render_info(&self, f: &mut Frame, area: Rect, title: &str, message: &str) {
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Min(1), Constraint::Length(3)])
.split(area);
let block = Block::default()
.title(format!(" {title} "))
.borders(Borders::ALL)
.style(Style::default().bg(Color::Black));
f.render_widget(block, area);
f.render_widget(
Paragraph::new(message)
.style(Style::default().fg(Color::White))
.alignment(Alignment::Center)
.wrap(Wrap { trim: true }),
chunks[0],
);
let ok_style = if self.active_button == ModalButton::Ok {
Style::default()
.bg(Color::Blue)
.fg(Color::White)
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Color::Blue)
};
f.render_widget(
Paragraph::new("[ Enter ] OK")
.style(ok_style)
.alignment(Alignment::Center),
chunks[1],
);
}
fn render_about(&self, f: &mut Frame, area: Rect) {
// Get the ASCII art from a constant stored in theme.rs
use super::theme::ASCII_ART;
let version = env!("CARGO_PKG_VERSION");
let about_text = format!(
"{}\n\
Version {}\n\
\n\
A terminal-first remote monitoring tool\n\
\n\
Website: https://socktop.io\n\
GitHub: https://github.com/jasonwitty/socktop\n\
\n\
License: MIT License\n\
\n\
Created by Jason Witty\n\
jasonpwitty+socktop@proton.me",
ASCII_ART, version
);
// Render the border block
let block = Block::default()
.title(" About socktop ")
.borders(Borders::ALL)
.style(Style::default().bg(Color::Black).fg(Color::DarkGray));
f.render_widget(block, area);
// Calculate inner area manually to avoid any parent styling
let inner_area = Rect {
x: area.x + 1,
y: area.y + 1,
width: area.width.saturating_sub(2),
height: area.height.saturating_sub(2), // Leave room for button at bottom
};
// Render content area with explicit black background
f.render_widget(
Paragraph::new(about_text)
.style(Style::default().fg(Color::Cyan).bg(Color::Black))
.alignment(Alignment::Center)
.wrap(Wrap { trim: false }),
inner_area,
);
// Button area
let button_area = Rect {
x: area.x + 1,
y: area.y + area.height.saturating_sub(2),
width: area.width.saturating_sub(2),
height: 1,
};
let ok_style = if self.active_button == ModalButton::Ok {
Style::default()
.bg(Color::Blue)
.fg(Color::White)
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Color::Blue).bg(Color::Black)
};
f.render_widget(
Paragraph::new("[ Enter ] Close")
.style(ok_style)
.alignment(Alignment::Center),
button_area,
);
}
fn render_help(&self, f: &mut Frame, area: Rect) {
let help_lines = vec![
"GLOBAL",
" q/Q/Esc ........ Quit │ a/A ....... About │ h/H ....... Help",
"",
"PROCESS LIST",
" / .............. Start/edit fuzzy search",
" c/C ............ Clear search filter",
" ↑/↓ ............ Select/navigate processes",
" Enter .......... Open Process Details",
" x/X ............ Clear selection",
" Click header ... Sort by column (CPU/Mem)",
" Click row ...... Select process",
"",
"SEARCH MODE (after pressing /)",
" Type ........... Enter search query (fuzzy match)",
" ↑/↓ ............ Navigate results while typing",
" Esc ............ Cancel search and clear filter",
" Enter .......... Apply filter and select first result",
"",
"CPU PER-CORE",
" ←/→ ............ Scroll cores │ PgUp/PgDn ... Page up/down",
" Home/End ....... Jump to first/last core",
"",
"PROCESS DETAILS MODAL",
" x/X ............ Close modal (all parent modals)",
" p/P ............ Navigate to parent process",
" j/k ............ Scroll threads ↓/↑ (1 line)",
" d/u ............ Scroll threads ↓/↑ (10 lines)",
" [ / ] .......... Scroll journal ↑/↓",
" Esc/Enter ...... Close modal",
"",
"MODAL NAVIGATION",
" Tab/→ .......... Next button │ Shift+Tab/← ... Previous button",
" Enter .......... Confirm/OK │ Esc ............ Cancel/Close",
];
// Render the border block
let block = Block::default()
.title(" Hotkey Help (use ↑/↓ to scroll) ")
.borders(Borders::ALL)
.style(Style::default().bg(Color::Black).fg(Color::DarkGray));
f.render_widget(block, area);
// Split into content area and button area
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Min(1), Constraint::Length(1)])
.split(Rect {
x: area.x + 1,
y: area.y + 1,
width: area.width.saturating_sub(2),
height: area.height.saturating_sub(2),
});
let content_area = chunks[0];
let button_area = chunks[1];
// Calculate visible window
let visible_height = content_area.height as usize;
let total_lines = help_lines.len();
let max_scroll = total_lines.saturating_sub(visible_height);
let scroll_offset = self.help_scroll_offset.min(max_scroll);
// Get visible lines
let visible_lines: Vec<Line> = help_lines
.iter()
.skip(scroll_offset)
.take(visible_height)
.map(|s| Line::from(*s))
.collect();
// Render scrollable content
f.render_widget(
Paragraph::new(visible_lines)
.style(Style::default().fg(Color::Cyan).bg(Color::Black))
.alignment(Alignment::Left),
content_area,
);
// Render scrollbar if needed
if total_lines > visible_height {
use ratatui::widgets::{Scrollbar, ScrollbarOrientation, ScrollbarState};
let scrollbar_area = Rect {
x: area.x + area.width.saturating_sub(2),
y: area.y + 1,
width: 1,
height: area.height.saturating_sub(2),
};
let mut scrollbar_state = ScrollbarState::new(max_scroll).position(scroll_offset);
let scrollbar = Scrollbar::new(ScrollbarOrientation::VerticalRight)
.begin_symbol(Some("↑"))
.end_symbol(Some("↓"))
.style(Style::default().fg(Color::DarkGray));
f.render_stateful_widget(scrollbar, scrollbar_area, &mut scrollbar_state);
}
// Button area
let ok_style = if self.active_button == ModalButton::Ok {
Style::default()
.bg(Color::Blue)
.fg(Color::White)
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Color::Blue).bg(Color::Black)
};
f.render_widget(
Paragraph::new("[ Enter ] Close")
.style(ok_style)
.alignment(Alignment::Center),
button_area,
);
}
fn centered_rect(&self, percent_x: u16, percent_y: u16, r: Rect) -> Rect {
let vert = Layout::default()
.direction(Direction::Vertical)
.constraints([
Constraint::Percentage((100 - percent_y) / 2),
Constraint::Percentage(percent_y),
Constraint::Percentage((100 - percent_y) / 2),
])
.split(r);
Layout::default()
.direction(Direction::Horizontal)
.constraints([
Constraint::Percentage((100 - percent_x) / 2),
Constraint::Percentage(percent_x),
Constraint::Percentage((100 - percent_x) / 2),
])
.split(vert[1])[1]
}
}


@ -0,0 +1,297 @@
//! Connection error modal rendering
use std::time::Instant;
use super::modal_format::format_duration;
use super::theme::{
BTN_EXIT_BG_ACTIVE, BTN_EXIT_FG_ACTIVE, BTN_EXIT_FG_INACTIVE, BTN_EXIT_TEXT,
BTN_RETRY_BG_ACTIVE, BTN_RETRY_FG_ACTIVE, BTN_RETRY_FG_INACTIVE, BTN_RETRY_TEXT, ICON_CLUSTER,
ICON_COUNTDOWN_LABEL, ICON_MESSAGE, ICON_OFFLINE_LABEL, ICON_RETRY_LABEL, ICON_WARNING_TITLE,
LARGE_ERROR_ICON, MODAL_AGENT_FG, MODAL_BG, MODAL_BORDER_FG, MODAL_COUNTDOWN_LABEL_FG,
MODAL_FG, MODAL_HINT_FG, MODAL_ICON_PINK, MODAL_OFFLINE_LABEL_FG, MODAL_RETRY_LABEL_FG,
MODAL_TITLE_FG,
};
use ratatui::{
Frame,
layout::{Alignment, Constraint, Direction, Layout, Rect},
style::{Color, Modifier, Style},
text::{Line, Span, Text},
widgets::{Block, Borders, Paragraph, Wrap},
};
use super::modal::{ModalButton, ModalManager};
impl ModalManager {
pub(super) fn render_connection_error(
&self,
f: &mut Frame,
area: Rect,
message: &str,
disconnected_at: Instant,
retry_count: u32,
auto_retry_countdown: Option<u64>,
) {
let duration_text = format_duration(disconnected_at.elapsed());
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([
Constraint::Length(3),
Constraint::Min(4),
Constraint::Length(4),
])
.split(area);
let block = Block::default()
.title(ICON_WARNING_TITLE)
.title_style(
Style::default()
.fg(MODAL_TITLE_FG)
.add_modifier(Modifier::BOLD),
)
.borders(Borders::ALL)
.border_style(Style::default().fg(MODAL_BORDER_FG))
.style(Style::default().bg(MODAL_BG).fg(MODAL_FG));
f.render_widget(block, area);
let content_area = chunks[1];
let max_w = content_area.width.saturating_sub(15) as usize;
let clean_message = if message.to_lowercase().contains("hostname verification")
|| message.contains("socktop_connector")
{
"Connection failed - hostname verification disabled".to_string()
} else if message.contains("Failed to fetch metrics:") {
if let Some(p) = message.find(':') {
let ess = message[p + 1..].trim();
if ess.chars().count() > max_w {
// Truncate on a char boundary to avoid panicking on multi-byte text.
let cut: String = ess.chars().take(max_w.saturating_sub(3)).collect();
format!("{cut}...")
} else {
ess.to_string()
}
} else {
"Connection error".to_string()
}
} else if message.starts_with("Retry failed:") {
if let Some(p) = message.find(':') {
let ess = message[p + 1..].trim();
if ess.chars().count() > max_w {
// Truncate on a char boundary to avoid panicking on multi-byte text.
let cut: String = ess.chars().take(max_w.saturating_sub(3)).collect();
format!("{cut}...")
} else {
ess.to_string()
}
} else {
"Retry failed".to_string()
}
} else if message.chars().count() > max_w {
// Char-boundary-safe truncation for arbitrary (possibly non-ASCII) messages.
let cut: String = message.chars().take(max_w.saturating_sub(3)).collect();
format!("{cut}...")
} else {
message.to_string()
};
let truncate = |s: &str| {
if s.chars().count() > max_w {
// Truncate on a char boundary to avoid panicking on multi-byte text.
let cut: String = s.chars().take(max_w.saturating_sub(3)).collect();
format!("{cut}...")
} else {
s.to_string()
}
};
let agent_text = truncate("📡 Cannot connect to socktop agent");
let message_text = truncate(&clean_message);
let duration_display = truncate(&duration_text);
let retry_display = truncate(&retry_count.to_string());
let countdown_text = auto_retry_countdown.map(|c| {
if c == 0 {
"Auto retry now...".to_string()
} else {
format!("{c}s")
}
});
// Determine if we have enough space (height + width) to show large centered icon
let icon_max_width = LARGE_ERROR_ICON
.iter()
.map(|l| l.trim().chars().count())
.max()
.unwrap_or(0) as u16;
let large_allowed = content_area.height >= (LARGE_ERROR_ICON.len() as u16 + 8)
&& content_area.width >= icon_max_width + 6; // small margin for borders/padding
let mut icon_lines: Vec<Line> = Vec::new();
if large_allowed {
for &raw in LARGE_ERROR_ICON.iter() {
let trimmed = raw.trim();
icon_lines.push(Line::from(
trimmed
.chars()
.map(|ch| {
if ch == '!' {
Span::styled(
ch.to_string(),
Style::default()
.fg(Color::White)
.add_modifier(Modifier::BOLD),
)
} else if ch == '/' || ch == '\\' || ch == '_' {
// keep outline in pink
Span::styled(
ch.to_string(),
Style::default()
.fg(MODAL_ICON_PINK)
.add_modifier(Modifier::BOLD),
)
} else if ch == ' ' {
Span::raw(" ")
} else {
Span::styled(ch.to_string(), Style::default().fg(MODAL_ICON_PINK))
}
})
.collect::<Vec<_>>(),
));
}
icon_lines.push(Line::from("")); // blank spacer line below icon
}
let mut info_lines: Vec<Line> = Vec::new();
if !large_allowed {
info_lines.push(Line::from(vec![Span::styled(
ICON_CLUSTER,
Style::default().fg(MODAL_ICON_PINK),
)]));
info_lines.push(Line::from(""));
}
info_lines.push(Line::from(vec![Span::styled(
&agent_text,
Style::default().fg(MODAL_AGENT_FG),
)]));
info_lines.push(Line::from(""));
info_lines.push(Line::from(vec![
Span::styled(ICON_MESSAGE, Style::default().fg(MODAL_HINT_FG)),
Span::styled(&message_text, Style::default().fg(MODAL_AGENT_FG)),
]));
info_lines.push(Line::from(""));
info_lines.push(Line::from(vec![
Span::styled(
ICON_OFFLINE_LABEL,
Style::default().fg(MODAL_OFFLINE_LABEL_FG),
),
Span::styled(
&duration_display,
Style::default()
.fg(Color::White)
.add_modifier(Modifier::BOLD),
),
]));
info_lines.push(Line::from(vec![
Span::styled(ICON_RETRY_LABEL, Style::default().fg(MODAL_RETRY_LABEL_FG)),
Span::styled(
&retry_display,
Style::default()
.fg(Color::White)
.add_modifier(Modifier::BOLD),
),
]));
if let Some(cd) = &countdown_text {
info_lines.push(Line::from(vec![
Span::styled(
ICON_COUNTDOWN_LABEL,
Style::default().fg(MODAL_COUNTDOWN_LABEL_FG),
),
Span::styled(
cd,
Style::default()
.fg(Color::White)
.add_modifier(Modifier::BOLD),
),
]));
}
let constrained = Rect {
x: content_area.x + 2,
y: content_area.y,
width: content_area.width.saturating_sub(4),
height: content_area.height,
};
if large_allowed {
let split = Layout::default()
.direction(Direction::Vertical)
.constraints([
Constraint::Length(icon_lines.len() as u16),
Constraint::Min(0),
])
.split(constrained);
// Center the icon block; each line already trimmed so per-line centering keeps shape
f.render_widget(
Paragraph::new(Text::from(icon_lines))
.alignment(Alignment::Center)
.wrap(Wrap { trim: false }),
split[0],
);
f.render_widget(
Paragraph::new(Text::from(info_lines))
.alignment(Alignment::Center)
.wrap(Wrap { trim: true }),
split[1],
);
} else {
f.render_widget(
Paragraph::new(Text::from(info_lines))
.alignment(Alignment::Center)
.wrap(Wrap { trim: true }),
constrained,
);
}
let button_area = Rect {
x: chunks[2].x,
y: chunks[2].y,
width: chunks[2].width,
height: chunks[2].height.saturating_sub(1),
};
self.render_connection_error_buttons(f, button_area);
}
fn render_connection_error_buttons(&self, f: &mut Frame, area: Rect) {
let button_chunks = Layout::default()
.direction(Direction::Horizontal)
.constraints([
Constraint::Percentage(30),
Constraint::Percentage(15),
Constraint::Percentage(10),
Constraint::Percentage(15),
Constraint::Percentage(30),
])
.split(area);
let retry_style = if self.active_button == ModalButton::Retry {
Style::default()
.bg(BTN_RETRY_BG_ACTIVE)
.fg(BTN_RETRY_FG_ACTIVE)
.add_modifier(Modifier::BOLD)
} else {
Style::default()
.fg(BTN_RETRY_FG_INACTIVE)
.add_modifier(Modifier::DIM)
};
let exit_style = if self.active_button == ModalButton::Exit {
Style::default()
.bg(BTN_EXIT_BG_ACTIVE)
.fg(BTN_EXIT_FG_ACTIVE)
.add_modifier(Modifier::BOLD)
} else {
Style::default()
.fg(BTN_EXIT_FG_INACTIVE)
.add_modifier(Modifier::DIM)
};
f.render_widget(
Paragraph::new(Text::from(Line::from(vec![Span::styled(
BTN_RETRY_TEXT,
retry_style,
)])))
.alignment(Alignment::Center),
button_chunks[1],
);
f.render_widget(
Paragraph::new(Text::from(Line::from(vec![Span::styled(
BTN_EXIT_TEXT,
exit_style,
)])))
.alignment(Alignment::Center),
button_chunks[3],
);
}
}


@ -0,0 +1,112 @@
//! Formatting utilities for process details modal
use std::time::Duration;
/// Format uptime in human-readable form
pub fn format_uptime(secs: u64) -> String {
let days = secs / 86400;
let hours = (secs % 86400) / 3600;
let minutes = (secs % 3600) / 60;
let seconds = secs % 60;
if days > 0 {
format!("{days}d {hours}h {minutes}m")
} else if hours > 0 {
format!("{hours}h {minutes}m {seconds}s")
} else if minutes > 0 {
format!("{minutes}m {seconds}s")
} else {
format!("{seconds}s")
}
}
/// Format duration in human-readable form
pub fn format_duration(duration: Duration) -> String {
let total = duration.as_secs();
let h = total / 3600;
let m = (total % 3600) / 60;
let s = total % 60;
if h > 0 {
format!("{h}h {m}m {s}s")
} else if m > 0 {
format!("{m}m {s}s")
} else {
format!("{s}s")
}
}
/// Normalize CPU usage to 0-100% by dividing by thread count
pub fn normalize_cpu_usage(cpu_usage: f32, thread_count: u32) -> f32 {
let threads = thread_count.max(1) as f32;
(cpu_usage / threads).min(100.0)
}
/// Calculate dynamic Y-axis maximum in 10% increments
pub fn calculate_dynamic_y_max(max_value: f64) -> f64 {
((max_value / 10.0).ceil() * 10.0).clamp(10.0, 100.0)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_format_uptime_seconds() {
assert_eq!(format_uptime(45), "45s");
}
#[test]
fn test_format_uptime_minutes() {
assert_eq!(format_uptime(125), "2m 5s");
}
#[test]
fn test_format_uptime_hours() {
assert_eq!(format_uptime(3665), "1h 1m 5s");
}
#[test]
fn test_format_uptime_days() {
assert_eq!(format_uptime(90061), "1d 1h 1m");
}
#[test]
fn test_normalize_cpu_single_thread() {
assert_eq!(normalize_cpu_usage(50.0, 1), 50.0);
}
#[test]
fn test_normalize_cpu_multi_thread() {
assert_eq!(normalize_cpu_usage(400.0, 4), 100.0);
}
#[test]
fn test_normalize_cpu_zero_threads() {
// Should default to 1 thread to avoid division by zero
assert_eq!(normalize_cpu_usage(100.0, 0), 100.0);
}
#[test]
fn test_normalize_cpu_caps_at_100() {
assert_eq!(normalize_cpu_usage(150.0, 1), 100.0);
}
#[test]
fn test_dynamic_y_max_rounds_up() {
assert_eq!(calculate_dynamic_y_max(15.0), 20.0);
assert_eq!(calculate_dynamic_y_max(25.0), 30.0);
assert_eq!(calculate_dynamic_y_max(5.0), 10.0);
}
#[test]
fn test_dynamic_y_max_minimum() {
assert_eq!(calculate_dynamic_y_max(0.0), 10.0);
assert_eq!(calculate_dynamic_y_max(3.0), 10.0);
}
#[test]
fn test_dynamic_y_max_caps_at_100() {
assert_eq!(calculate_dynamic_y_max(95.0), 100.0);
assert_eq!(calculate_dynamic_y_max(100.0), 100.0);
}
}

File diff suppressed because it is too large.


@ -0,0 +1,77 @@
//! Type definitions for modal system
use std::time::Instant;
/// History data for process metrics rendering
pub struct ProcessHistoryData<'a> {
pub cpu: &'a std::collections::VecDeque<f32>,
pub mem: &'a std::collections::VecDeque<u64>,
pub io_read: &'a std::collections::VecDeque<u64>,
pub io_write: &'a std::collections::VecDeque<u64>,
}
/// Process data for modal rendering
pub struct ProcessModalData<'a> {
pub details: Option<&'a socktop_connector::ProcessMetricsResponse>,
pub journal: Option<&'a socktop_connector::JournalResponse>,
pub history: ProcessHistoryData<'a>,
pub max_mem_bytes: u64,
pub unsupported: bool,
}
/// Parameters for rendering scatter plot
pub(super) struct ScatterPlotParams<'a> {
pub process: &'a socktop_connector::DetailedProcessInfo,
pub main_user_ms: f64,
pub main_system_ms: f64,
pub max_user: f64,
pub max_system: f64,
}
#[derive(Debug, Clone)]
pub enum ModalType {
ConnectionError {
message: String,
disconnected_at: Instant,
retry_count: u32,
auto_retry_countdown: Option<u64>,
},
ProcessDetails {
pid: u32,
},
About,
Help,
#[allow(dead_code)]
Confirmation {
title: String,
message: String,
confirm_text: String,
cancel_text: String,
},
#[allow(dead_code)]
Info {
title: String,
message: String,
},
}
#[derive(Debug, Clone, PartialEq)]
pub enum ModalAction {
None, // Modal didn't handle the key, pass to main window
Handled, // Modal handled the key, don't pass to main window
RetryConnection,
ExitApp,
Confirm,
Cancel,
Dismiss,
SwitchToParentProcess(u32), // Switch to viewing parent process details
}
#[derive(Debug, Clone, PartialEq)]
pub enum ModalButton {
Retry,
Exit,
Confirm,
Cancel,
Ok,
}


@ -1,11 +1,11 @@
//! Network sparklines (download/upload).
use std::collections::VecDeque;
use ratatui::{
layout::Rect,
style::{Color, Style},
widgets::{Block, Borders, Sparkline},
};
use std::collections::VecDeque;
pub fn draw_net_spark(
f: &mut ratatui::Frame<'_>,
@ -19,8 +19,12 @@ pub fn draw_net_spark(
let data: Vec<u64> = hist.iter().skip(start).cloned().collect();
let spark = Sparkline::default()
.block(Block::default().borders(Borders::ALL).title(title.to_string()))
.block(
Block::default()
.borders(Borders::ALL)
.title(title.to_string()),
)
.data(&data)
.style(Style::default().fg(color));
f.render_widget(spark, area);
}
}


@ -1,29 +1,189 @@
//! Top processes table with per-cell coloring and zebra striping.
//! Top processes table with per-cell coloring, zebra striping, sorting, and a scrollbar.
use ratatui::{
layout::{Constraint, Rect},
style::{Color, Style},
widgets::{Block, Borders, Cell, Row, Table},
};
use crossterm::event::{MouseButton, MouseEvent, MouseEventKind};
use ratatui::style::Modifier;
use ratatui::{
layout::{Constraint, Direction, Layout, Rect},
style::{Color, Style},
text::{Line, Span},
widgets::{Block, Borders, Paragraph, Table},
};
use std::cmp::Ordering;
use crate::types::Metrics;
use crate::ui::cpu::{per_core_clamp, per_core_handle_scrollbar_mouse};
use crate::ui::theme::{
PROCESS_SELECTION_BG, PROCESS_SELECTION_FG, PROCESS_TOOLTIP_BG, PROCESS_TOOLTIP_FG, SB_ARROW,
SB_THUMB, SB_TRACK,
};
use crate::ui::util::human;
pub fn draw_top_processes(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
let Some(mm) = m else {
f.render_widget(Block::default().borders(Borders::ALL).title("Top Processes"), area);
return;
/// Simple fuzzy matching: returns true if all characters in needle appear in haystack in order (case-insensitive)
fn fuzzy_match(haystack: &str, needle: &str) -> bool {
if needle.is_empty() {
return true;
}
let haystack_lower = haystack.to_lowercase();
let needle_lower = needle.to_lowercase();
let mut haystack_chars = haystack_lower.chars();
for needle_char in needle_lower.chars() {
if !haystack_chars.any(|c| c == needle_char) {
return false;
}
}
true
}
/// Get filtered and sorted process indices based on search query and sort order
pub fn get_filtered_sorted_indices(
metrics: &Metrics,
search_query: &str,
sort_by: ProcSortBy,
) -> Vec<usize> {
// Filter processes by search query (fuzzy match)
let mut filtered_idxs: Vec<usize> = if search_query.is_empty() {
(0..metrics.top_processes.len()).collect()
} else {
(0..metrics.top_processes.len())
.filter(|&i| fuzzy_match(&metrics.top_processes[i].name, search_query))
.collect()
};
let total_mem_bytes = mm.mem_total.max(1);
let title = format!("Top Processes ({} total)", mm.process_count);
let peak_cpu = mm.top_processes.iter().map(|p| p.cpu_usage).fold(0.0_f32, f32::max);
// Sort filtered rows
match sort_by {
ProcSortBy::CpuDesc => filtered_idxs.sort_by(|&a, &b| {
let aa = metrics.top_processes[a].cpu_usage;
let bb = metrics.top_processes[b].cpu_usage;
bb.partial_cmp(&aa).unwrap_or(Ordering::Equal)
}),
ProcSortBy::MemDesc => filtered_idxs.sort_by(|&a, &b| {
let aa = metrics.top_processes[a].mem_bytes;
let bb = metrics.top_processes[b].mem_bytes;
bb.cmp(&aa)
}),
}
let rows: Vec<Row> = mm.top_processes.iter().enumerate().map(|(i, p)| {
filtered_idxs
}
/// Parameters for drawing the top processes table
pub struct ProcessDisplayParams<'a> {
pub metrics: Option<&'a Metrics>,
pub scroll_offset: usize,
pub sort_by: ProcSortBy,
pub selected_process_pid: Option<u32>,
pub selected_process_index: Option<usize>,
pub search_query: &'a str,
pub search_active: bool,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum ProcSortBy {
#[default]
CpuDesc,
MemDesc,
}
// Keep the original header widths here so drawing and hit-testing match.
const COLS: [Constraint; 5] = [
Constraint::Length(8), // PID
Constraint::Percentage(40), // Name
Constraint::Length(8), // CPU %
Constraint::Length(12), // Mem
Constraint::Length(8), // Mem %
];
pub fn draw_top_processes(f: &mut ratatui::Frame<'_>, area: Rect, params: ProcessDisplayParams) {
// Draw outer block and title
let Some(mm) = params.metrics else { return };
let total = mm.process_count.unwrap_or(mm.top_processes.len());
let block = Block::default()
.borders(Borders::ALL)
.title(format!("Top Processes ({total} total)"));
f.render_widget(block, area);
// Inner area (reserve space for search box if active)
let inner = Rect {
x: area.x + 1,
y: area.y + 1,
width: area.width.saturating_sub(2),
height: area.height.saturating_sub(2),
};
// Draw search box if active
let content_start_y = if params.search_active || !params.search_query.is_empty() {
let search_area = Rect {
x: inner.x,
y: inner.y,
width: inner.width,
height: 3, // Height for border + content
};
let search_text = if params.search_active {
format!("Search: {}_", params.search_query)
} else {
format!(
"Filter: {} (press / to edit, c to clear)",
params.search_query
)
};
let search_block = Block::default()
.borders(Borders::ALL)
.border_style(Style::default().fg(Color::Yellow));
let search_paragraph = Paragraph::new(search_text)
.block(search_block)
.style(Style::default().fg(Color::Yellow));
f.render_widget(search_paragraph, search_area);
inner.y + 3
} else {
inner.y
};
// Content area (reserve 2 columns for scrollbar)
let inner = Rect {
x: inner.x,
y: content_start_y,
width: inner.width,
height: inner.height.saturating_sub(content_start_y - (area.y + 1)),
};
if inner.height < 1 || inner.width < 3 {
return;
}
let content = Rect {
x: inner.x,
y: inner.y,
width: inner.width.saturating_sub(2),
height: inner.height,
};
// Get filtered and sorted indices
let idxs = get_filtered_sorted_indices(mm, params.search_query, params.sort_by);
// Scrolling
let total_rows = idxs.len();
let header_rows = 1usize;
let viewport_rows = content.height.saturating_sub(header_rows as u16) as usize;
let max_off = total_rows.saturating_sub(viewport_rows);
let offset = params.scroll_offset.min(max_off);
let show_n = total_rows.saturating_sub(offset).min(viewport_rows);
// Build visible rows
let total_mem_bytes = mm.mem_total.max(1);
let peak_cpu = mm
.top_processes
.iter()
.map(|p| p.cpu_usage)
.fold(0.0_f32, f32::max);
let rows_iter = idxs.iter().skip(offset).take(show_n).map(|&ix| {
let p = &mm.top_processes[ix];
let mem_pct = (p.mem_bytes as f64 / total_mem_bytes as f64) * 100.0;
let cpu_fg = match p.cpu_usage {
let cpu_val = p.cpu_usage;
let cpu_fg = match cpu_val {
x if x < 25.0 => Color::Green,
x if x < 60.0 => Color::Yellow,
_ => Color::Red,
@ -34,38 +194,445 @@ pub fn draw_top_processes(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Met
_ => Color::Red,
};
let zebra = if i % 2 == 0 { Style::default().fg(Color::Gray) } else { Style::default() };
let emphasis = if (p.cpu_usage - peak_cpu).abs() < f32::EPSILON {
let mut emphasis = if (cpu_val - peak_cpu).abs() < f32::EPSILON {
Style::default().add_modifier(Modifier::BOLD)
} else { Style::default() };
} else {
Style::default()
};
Row::new(vec![
Cell::from(p.pid.to_string()).style(Style::default().fg(Color::DarkGray)),
Cell::from(p.name.clone()),
Cell::from(format!("{:.1}%", p.cpu_usage)).style(Style::default().fg(cpu_fg)),
Cell::from(human(p.mem_bytes)),
Cell::from(format!("{:.2}%", mem_pct)).style(Style::default().fg(mem_fg)),
// Check if this process is selected - prioritize PID matching
let is_selected = if let Some(selected_pid) = params.selected_process_pid {
selected_pid == p.pid
} else if let Some(selected_idx) = params.selected_process_index {
selected_idx == ix // ix is the absolute index in the sorted list
} else {
false
};
// Apply selection highlighting
if is_selected {
emphasis = emphasis
.bg(PROCESS_SELECTION_BG)
.fg(PROCESS_SELECTION_FG)
.add_modifier(Modifier::BOLD);
}
let cpu_str = fmt_cpu_pct(cpu_val);
ratatui::widgets::Row::new(vec![
ratatui::widgets::Cell::from(p.pid.to_string())
.style(Style::default().fg(Color::DarkGray)),
ratatui::widgets::Cell::from(p.name.clone()),
ratatui::widgets::Cell::from(cpu_str).style(Style::default().fg(cpu_fg)),
ratatui::widgets::Cell::from(human(p.mem_bytes)),
ratatui::widgets::Cell::from(format!("{mem_pct:.2}%"))
.style(Style::default().fg(mem_fg)),
])
.style(zebra.patch(emphasis))
}).collect();
.style(emphasis)
});
let header = Row::new(vec!["PID", "Name", "CPU %", "Mem", "Mem %"])
.style(Style::default().fg(Color::Cyan).add_modifier(Modifier::BOLD));
// Header with sort indicator
let cpu_hdr = match params.sort_by {
ProcSortBy::CpuDesc => "CPU % •",
_ => "CPU %",
};
let mem_hdr = match params.sort_by {
ProcSortBy::MemDesc => "Mem •",
_ => "Mem",
};
let header = ratatui::widgets::Row::new(vec!["PID", "Name", cpu_hdr, mem_hdr, "Mem %"]).style(
Style::default()
.fg(Color::Cyan)
.add_modifier(Modifier::BOLD),
);
let table = Table::new(
rows,
vec![
Constraint::Length(8),
Constraint::Percentage(40),
Constraint::Length(8),
Constraint::Length(12),
Constraint::Length(8),
],
)
// Render table inside content area (no borders here; outer block already drawn)
let table = Table::new(rows_iter, COLS.to_vec())
.header(header)
.column_spacing(1)
.block(Block::default().borders(Borders::ALL).title(title));
.column_spacing(1);
f.render_widget(table, content);
f.render_widget(table, area);
}
// Draw tooltip if a process is selected
if let Some(selected_pid) = params.selected_process_pid {
// Find the selected process to get its name
let process_info = if let Some(metrics) = params.metrics {
metrics
.top_processes
.iter()
.find(|p| p.pid == selected_pid)
.map(|p| format!("PID {}: {}", p.pid, p.name))
.unwrap_or_else(|| format!("PID {selected_pid}"))
} else {
format!("PID {selected_pid}")
};
let tooltip_text = format!("{process_info} | Enter for details • X to unselect");
let tooltip_width = tooltip_text.len() as u16 + 2; // Add padding
let tooltip_height = 3;
// Position tooltip at bottom-right of the processes area
if area.width > tooltip_width + 2 && area.height > tooltip_height + 1 {
let tooltip_area = Rect {
x: area.x + area.width.saturating_sub(tooltip_width + 1),
y: area.y + area.height.saturating_sub(tooltip_height + 1),
width: tooltip_width,
height: tooltip_height,
};
let tooltip_block = Block::default().borders(Borders::ALL).style(
Style::default()
.bg(PROCESS_TOOLTIP_BG)
.fg(PROCESS_TOOLTIP_FG),
);
let tooltip_paragraph = Paragraph::new(tooltip_text)
.block(tooltip_block)
.wrap(ratatui::widgets::Wrap { trim: true });
f.render_widget(tooltip_paragraph, tooltip_area);
}
}
// Draw scrollbar like CPU pane
let scroll_area = Rect {
x: inner.x + inner.width.saturating_sub(1),
y: inner.y,
width: 1,
height: inner.height,
};
if scroll_area.height >= 3 {
let track = (scroll_area.height - 2) as usize;
let total = total_rows.max(1);
let view = viewport_rows.clamp(1, total);
let max_off = total.saturating_sub(view);
let thumb_len = (track * view).div_ceil(total).max(1).min(track);
let thumb_top = if max_off == 0 {
0
} else {
((track - thumb_len) * offset + max_off / 2) / max_off
};
// Build lines: top arrow, track (with thumb), bottom arrow
let mut lines: Vec<Line> = Vec::with_capacity(scroll_area.height as usize);
lines.push(Line::from(Span::styled("▲", Style::default().fg(SB_ARROW))));
for i in 0..track {
if i >= thumb_top && i < thumb_top + thumb_len {
lines.push(Line::from(Span::styled("█", Style::default().fg(SB_THUMB))));
} else {
lines.push(Line::from(Span::styled("│", Style::default().fg(SB_TRACK))));
}
}
lines.push(Line::from(Span::styled("▼", Style::default().fg(SB_ARROW))));
f.render_widget(Paragraph::new(lines), scroll_area);
}
}
fn fmt_cpu_pct(v: f32) -> String {
format!("{:>5.1}", v.clamp(0.0, 100.0))
}
/// Handle keyboard scrolling (Up/Down/PageUp/PageDown/Home/End)
/// Parameters for process key event handling
pub struct ProcessKeyParams<'a> {
pub selected_process_pid: &'a mut Option<u32>,
pub selected_process_index: &'a mut Option<usize>,
pub key: crossterm::event::KeyEvent,
pub metrics: Option<&'a Metrics>,
pub sort_by: ProcSortBy,
pub search_query: &'a str,
}
/// LEGACY: Use processes_handle_key_with_selection for enhanced functionality
#[allow(dead_code)]
pub fn processes_handle_key(
scroll_offset: &mut usize,
key: crossterm::event::KeyEvent,
page_size: usize,
) {
crate::ui::cpu::per_core_handle_key(scroll_offset, key, page_size);
}
pub fn processes_handle_key_with_selection(params: ProcessKeyParams) -> bool {
use crossterm::event::KeyCode;
match params.key.code {
KeyCode::Up => {
// Navigate through filtered and sorted results
if let Some(m) = params.metrics {
let idxs = get_filtered_sorted_indices(m, params.search_query, params.sort_by);
if idxs.is_empty() {
// No filtered results, clear selection
*params.selected_process_index = None;
*params.selected_process_pid = None;
} else if params.selected_process_index.is_none()
|| params.selected_process_pid.is_none()
{
// No selection - select the first process in filtered/sorted order
let first_idx = idxs[0];
*params.selected_process_index = Some(first_idx);
*params.selected_process_pid = Some(m.top_processes[first_idx].pid);
} else if let Some(current_idx) = *params.selected_process_index {
// Find current position in filtered/sorted list
if let Some(pos) = idxs.iter().position(|&idx| idx == current_idx) {
if pos > 0 {
// Move up in filtered/sorted list
let new_idx = idxs[pos - 1];
*params.selected_process_index = Some(new_idx);
*params.selected_process_pid = Some(m.top_processes[new_idx].pid);
}
} else {
// Current selection not in filtered list, select first result
let first_idx = idxs[0];
*params.selected_process_index = Some(first_idx);
*params.selected_process_pid = Some(m.top_processes[first_idx].pid);
}
}
}
true // Handled
}
KeyCode::Down => {
// Navigate through filtered and sorted results
if let Some(m) = params.metrics {
let idxs = get_filtered_sorted_indices(m, params.search_query, params.sort_by);
if idxs.is_empty() {
// No filtered results, clear selection
*params.selected_process_index = None;
*params.selected_process_pid = None;
} else if params.selected_process_index.is_none()
|| params.selected_process_pid.is_none()
{
// No selection - select the first process in filtered/sorted order
let first_idx = idxs[0];
*params.selected_process_index = Some(first_idx);
*params.selected_process_pid = Some(m.top_processes[first_idx].pid);
} else if let Some(current_idx) = *params.selected_process_index {
// Find current position in filtered/sorted list
if let Some(pos) = idxs.iter().position(|&idx| idx == current_idx) {
if pos + 1 < idxs.len() {
// Move down in filtered/sorted list
let new_idx = idxs[pos + 1];
*params.selected_process_index = Some(new_idx);
*params.selected_process_pid = Some(m.top_processes[new_idx].pid);
}
} else {
// Current selection not in filtered list, select first result
let first_idx = idxs[0];
*params.selected_process_index = Some(first_idx);
*params.selected_process_pid = Some(m.top_processes[first_idx].pid);
}
}
}
true // Handled
}
KeyCode::Char('x') | KeyCode::Char('X') => {
// Unselect any selected process
if params.selected_process_pid.is_some() || params.selected_process_index.is_some() {
*params.selected_process_pid = None;
*params.selected_process_index = None;
true // Handled
} else {
false // No selection to clear
}
}
KeyCode::Enter => {
// Signal that Enter was pressed with a selection
params.selected_process_pid.is_some() // Return true if we have a selection to handle
}
_ => {
// No other keys handled - let scrollbar handle all navigation
false
}
}
}
/// Handle mouse for content scrolling and scrollbar dragging.
/// Returns Some(new_sort) if the header "CPU %" or "Mem" was clicked.
/// LEGACY: Use processes_handle_mouse_with_selection for enhanced functionality
#[allow(dead_code)]
pub fn processes_handle_mouse(
scroll_offset: &mut usize,
drag: &mut Option<crate::ui::cpu::PerCoreScrollDrag>,
mouse: MouseEvent,
area: Rect,
total_rows: usize,
) -> Option<ProcSortBy> {
// Inner and content areas (match draw_top_processes)
let inner = Rect {
x: area.x + 1,
y: area.y + 1,
width: area.width.saturating_sub(2),
height: area.height.saturating_sub(2),
};
if inner.height == 0 || inner.width <= 2 {
return None;
}
let content = Rect {
x: inner.x,
y: inner.y,
width: inner.width.saturating_sub(2),
height: inner.height,
};
// Scrollbar interactions (click arrows/page/drag)
per_core_handle_scrollbar_mouse(scroll_offset, drag, mouse, area, total_rows);
// Wheel scrolling when inside the content
crate::ui::cpu::per_core_handle_mouse(scroll_offset, mouse, content, content.height as usize);
// Header click to change sort
let header_area = Rect {
x: content.x,
y: content.y,
width: content.width,
height: 1,
};
let inside_header = mouse.row == header_area.y
&& mouse.column >= header_area.x
&& mouse.column < header_area.x + header_area.width;
if inside_header && matches!(mouse.kind, MouseEventKind::Down(MouseButton::Left)) {
// Split header into the same columns
let cols = Layout::default()
.direction(Direction::Horizontal)
.constraints(COLS.to_vec())
.split(header_area);
if mouse.column >= cols[2].x && mouse.column < cols[2].x + cols[2].width {
return Some(ProcSortBy::CpuDesc);
}
if mouse.column >= cols[3].x && mouse.column < cols[3].x + cols[3].width {
return Some(ProcSortBy::MemDesc);
}
}
// Clamp to valid range
per_core_clamp(
scroll_offset,
total_rows,
(content.height.saturating_sub(1)) as usize,
);
None
}
/// Parameters for process mouse event handling
pub struct ProcessMouseParams<'a> {
pub scroll_offset: &'a mut usize,
pub selected_process_pid: &'a mut Option<u32>,
pub selected_process_index: &'a mut Option<usize>,
pub drag: &'a mut Option<crate::ui::cpu::PerCoreScrollDrag>,
pub mouse: MouseEvent,
pub area: Rect,
pub total_rows: usize,
pub metrics: Option<&'a Metrics>,
pub sort_by: ProcSortBy,
pub search_query: &'a str,
}
/// Enhanced mouse handler that also manages process selection
/// Returns Some(new_sort) if the header was clicked, or handles row selection
pub fn processes_handle_mouse_with_selection(params: ProcessMouseParams) -> Option<ProcSortBy> {
// Inner and content areas (match draw_top_processes)
let inner = Rect {
x: params.area.x + 1,
y: params.area.y + 1,
width: params.area.width.saturating_sub(2),
height: params.area.height.saturating_sub(2),
};
if inner.height == 0 || inner.width <= 2 {
return None;
}
// Calculate content area - must match draw_top_processes exactly!
// If search is active or query exists, content starts after search box (3 lines)
let search_active = !params.search_query.is_empty();
let content_start_y = if search_active { inner.y + 3 } else { inner.y };
let content = Rect {
x: inner.x,
y: content_start_y,
width: inner.width.saturating_sub(2),
height: inner
.height
.saturating_sub(if search_active { 3 } else { 0 }),
};
// Scrollbar interactions (click arrows/page/drag)
per_core_handle_scrollbar_mouse(
params.scroll_offset,
params.drag,
params.mouse,
params.area,
params.total_rows,
);
// Wheel scrolling when inside the content
crate::ui::cpu::per_core_handle_mouse(
params.scroll_offset,
params.mouse,
content,
content.height as usize,
);
// Header click to change sort
let header_area = Rect {
x: content.x,
y: content.y,
width: content.width,
height: 1,
};
let inside_header = params.mouse.row == header_area.y
&& params.mouse.column >= header_area.x
&& params.mouse.column < header_area.x + header_area.width;
if inside_header && matches!(params.mouse.kind, MouseEventKind::Down(MouseButton::Left)) {
// Split header into the same columns
let cols = Layout::default()
.direction(Direction::Horizontal)
.constraints(COLS.to_vec())
.split(header_area);
if params.mouse.column >= cols[2].x && params.mouse.column < cols[2].x + cols[2].width {
return Some(ProcSortBy::CpuDesc);
}
if params.mouse.column >= cols[3].x && params.mouse.column < cols[3].x + cols[3].width {
return Some(ProcSortBy::MemDesc);
}
}
// Row click for process selection
let data_start_row = content.y + 1; // Skip header
let data_area_height = content.height.saturating_sub(1); // Exclude header
if matches!(params.mouse.kind, MouseEventKind::Down(MouseButton::Left))
&& params.mouse.row >= data_start_row
&& params.mouse.row < data_start_row + data_area_height
&& params.mouse.column >= content.x
&& params.mouse.column < content.x + content.width
{
let clicked_row = (params.mouse.row - data_start_row) as usize;
// Find the actual process using the same filtering/sorting logic as the drawing code
if let Some(m) = params.metrics {
// Use the same filtered and sorted indices as display
let idxs = get_filtered_sorted_indices(m, params.search_query, params.sort_by);
// Calculate which process was actually clicked based on filtered/sorted order
let visible_process_position = *params.scroll_offset + clicked_row;
if visible_process_position < idxs.len() {
let actual_process_index = idxs[visible_process_position];
let clicked_process = &m.top_processes[actual_process_index];
*params.selected_process_pid = Some(clicked_process.pid);
*params.selected_process_index = Some(actual_process_index);
}
}
}
// Clamp to valid range
per_core_clamp(
params.scroll_offset,
params.total_rows,
(content.height.saturating_sub(1)) as usize,
);
None
}
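The row-click arithmetic above reduces to a pure function: the clicked data row plus the scroll offset gives a position in the filtered/sorted index list, which in turn points at the real process. A minimal sketch (hypothetical names, not the actual socktop code):

```rust
// Maps a clicked terminal row to an index into the full process list.
// `filtered_sorted` holds indices into the full list, in display order,
// exactly as the drawing code produces them.
fn clicked_process_index(
    clicked_row: usize,        // row within the data area (0 = first row under the header)
    scroll_offset: usize,      // how far the list is scrolled
    filtered_sorted: &[usize], // display-order indices into the full process list
) -> Option<usize> {
    let visible_position = scroll_offset + clicked_row;
    filtered_sorted.get(visible_position).copied()
}

fn main() {
    // Processes 10, 42, 7 are visible in that (filtered/sorted) order.
    let idxs = [10, 42, 7];
    // With the list scrolled down by one, clicking the first data row hits 42.
    assert_eq!(clicked_process_index(0, 1, &idxs), Some(42));
    // Clicking past the end selects nothing.
    assert_eq!(clicked_process_index(5, 1, &idxs), None);
    println!("ok");
}
```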

View File

@ -1,18 +1,24 @@
//! Swap gauge.
use crate::types::Metrics;
use crate::ui::util::human;
use ratatui::{
layout::Rect,
style::{Color, Style},
widgets::{Block, Borders, Gauge},
};
use crate::types::Metrics;
use crate::ui::util::human;
pub fn draw_swap(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
let (used, total, pct) = if let Some(mm) = m {
let pct = if mm.swap_total > 0 { (mm.swap_used as f64 / mm.swap_total as f64 * 100.0) as u16 } else { 0 };
let pct = if mm.swap_total > 0 {
(mm.swap_used as f64 / mm.swap_total as f64 * 100.0) as u16
} else {
0
};
(mm.swap_used, mm.swap_total, pct)
} else { (0, 0, 0) };
} else {
(0, 0, 0)
};
let g = Gauge::default()
.block(Block::default().borders(Borders::ALL).title("Swap"))
@ -20,4 +26,4 @@ pub fn draw_swap(f: &mut ratatui::Frame<'_>, area: Rect, m: Option<&Metrics>) {
.percent(pct)
.label(format!("{} / {}", human(used), human(total)));
f.render_widget(g, area);
}
}

88
socktop/src/ui/theme.rs Normal file
View File

@ -0,0 +1,88 @@
//! Shared UI theme constants.
use ratatui::style::Color;
// Scrollbar colors (same look as before)
pub const SB_ARROW: Color = Color::Rgb(170, 170, 180);
pub const SB_TRACK: Color = Color::Rgb(170, 170, 180);
pub const SB_THUMB: Color = Color::Rgb(170, 170, 180);
// Modal palette
pub const MODAL_DIM_BG: Color = Color::Rgb(15, 15, 25);
pub const MODAL_BG: Color = Color::Rgb(26, 26, 46);
pub const MODAL_FG: Color = Color::Rgb(230, 230, 230);
pub const MODAL_TITLE_FG: Color = Color::Rgb(255, 102, 102); // soft red for title text
pub const MODAL_BORDER_FG: Color = Color::Rgb(204, 51, 51); // darker red border
pub const MODAL_ICON_PINK: Color = Color::Rgb(255, 182, 193); // light pink icons line
pub const MODAL_AGENT_FG: Color = Color::Rgb(220, 220, 255); // pale periwinkle
pub const MODAL_HINT_FG: Color = Color::Rgb(255, 215, 0); // gold for message icon
pub const MODAL_OFFLINE_LABEL_FG: Color = Color::Rgb(135, 206, 235); // sky blue label
pub const MODAL_RETRY_LABEL_FG: Color = Color::Rgb(255, 165, 0); // orange label
pub const MODAL_COUNTDOWN_LABEL_FG: Color = Color::Rgb(255, 192, 203); // pink label for countdown
// Buttons
pub const BTN_RETRY_BG_ACTIVE: Color = Color::Rgb(46, 204, 113); // modern green
pub const BTN_RETRY_FG_ACTIVE: Color = Color::Rgb(26, 26, 46);
pub const BTN_RETRY_FG_INACTIVE: Color = Color::Rgb(46, 204, 113);
pub const BTN_EXIT_BG_ACTIVE: Color = Color::Rgb(255, 255, 255); // white background when active

pub const BTN_EXIT_FG_ACTIVE: Color = Color::Rgb(26, 26, 46);
pub const BTN_EXIT_FG_INACTIVE: Color = Color::Rgb(255, 255, 255);
// Process selection colors
pub const PROCESS_SELECTION_BG: Color = Color::Rgb(147, 112, 219); // Medium slate blue (purple)
pub const PROCESS_SELECTION_FG: Color = Color::Rgb(255, 255, 255); // White text for contrast
pub const PROCESS_TOOLTIP_BG: Color = Color::Rgb(147, 112, 219); // Same purple as selection
pub const PROCESS_TOOLTIP_FG: Color = Color::Rgb(255, 255, 255); // White text for contrast
// Process details modal colors (matches main UI aesthetic - no custom colors, terminal defaults)
pub const PROCESS_DETAILS_ACCENT: Color = Color::Rgb(147, 112, 219); // Purple accent for highlights
// Emoji / icon strings (centralized so they can be themed/swapped later)
pub const ICON_WARNING_TITLE: &str = " 🔌 CONNECTION ERROR ";
pub const ICON_CLUSTER: &str = "⚠️";
pub const ICON_MESSAGE: &str = "💭 ";
pub const ICON_OFFLINE_LABEL: &str = "⏱️ Offline for: ";
pub const ICON_RETRY_LABEL: &str = "🔄 Retry attempts: ";
pub const ICON_COUNTDOWN_LABEL: &str = "⏰ Next auto retry: ";
pub const BTN_RETRY_TEXT: &str = " 🔄 Retry ";
pub const BTN_EXIT_TEXT: &str = " ❌ Exit ";
// warning icon
pub const LARGE_ERROR_ICON: &[&str] = &[
" /\\ ",
" / \\ ",
" / !! \\ ",
" / !!!! \\ ",
" / !! \\ ",
" / !!!! \\ ",
" / !! \\ ",
" /______________\\ ",
];
// About logo
pub const ASCII_ART: &str = r#"
"#;

View File

@ -3,31 +3,49 @@
pub fn human(b: u64) -> String {
const K: f64 = 1024.0;
let b = b as f64;
if b < K { return format!("{b:.0}B"); }
if b < K {
return format!("{b:.0}B");
}
let kb = b / K;
if kb < K { return format!("{kb:.1}KB"); }
if kb < K {
return format!("{kb:.1}KB");
}
let mb = kb / K;
if mb < K { return format!("{mb:.1}MB"); }
if mb < K {
return format!("{mb:.1}MB");
}
let gb = mb / K;
if gb < K { return format!("{gb:.1}GB"); }
if gb < K {
return format!("{gb:.1}GB");
}
let tb = gb / K;
format!("{tb:.2}TB")
}
pub fn truncate_middle(s: &str, max: usize) -> String {
if s.len() <= max { return s.to_string(); }
if max <= 3 { return "...".into(); }
if s.len() <= max {
return s.to_string();
}
if max <= 3 {
return "...".into();
}
let keep = max - 3;
let left = keep / 2;
let right = keep - left;
format!("{}...{}", &s[..left], &s[s.len()-right..])
format!("{}...{}", &s[..left], &s[s.len() - right..])
}
pub fn disk_icon(name: &str) -> &'static str {
let n = name.to_ascii_lowercase();
if n.contains(':') { "🗄️" }
else if n.contains("nvme") { "" }
else if n.starts_with("sd") { "💽" }
else if n.contains("overlay") { "📦" }
else { "🖴" }
}
if n.contains(':') {
"🗄️"
} else if n.contains("nvme") {
""
} else if n.starts_with("sd") {
"💽"
} else if n.contains("overlay") {
"📦"
} else {
"🖴"
}
}
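The reformatted helpers in this hunk are easy to sanity-check; copies are inlined below so the snippet stands alone (same logic as the diff above, not a new implementation):

```rust
// Human-readable byte formatter, as in the hunk above.
fn human(b: u64) -> String {
    const K: f64 = 1024.0;
    let b = b as f64;
    if b < K { return format!("{b:.0}B"); }
    let kb = b / K;
    if kb < K { return format!("{kb:.1}KB"); }
    let mb = kb / K;
    if mb < K { return format!("{mb:.1}MB"); }
    let gb = mb / K;
    if gb < K { return format!("{gb:.1}GB"); }
    let tb = gb / K;
    format!("{tb:.2}TB")
}

// Middle-ellipsis truncation, as in the hunk above. Note that it slices by
// byte index, so it assumes ASCII process names (multi-byte UTF-8 at the cut
// point would panic).
fn truncate_middle(s: &str, max: usize) -> String {
    if s.len() <= max { return s.to_string(); }
    if max <= 3 { return "...".into(); }
    let keep = max - 3;
    let left = keep / 2;
    let right = keep - left;
    format!("{}...{}", &s[..left], &s[s.len() - right..])
}

fn main() {
    assert_eq!(human(512), "512B");
    assert_eq!(human(1536), "1.5KB");            // 1.5 * 1024
    assert_eq!(human(3 * 1024 * 1024), "3.0MB");
    assert_eq!(truncate_middle("short", 10), "short");
    assert_eq!(truncate_middle("a_very_long_process_name", 10), "a_v...name");
    println!("ok");
}
```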

View File

@ -1,28 +0,0 @@
//! Minimal WebSocket client helpers for requesting metrics from the agent.
use tokio::net::TcpStream;
use tokio_tungstenite::{connect_async, tungstenite::Message, MaybeTlsStream, WebSocketStream};
use crate::types::Metrics;
pub type WsStream = WebSocketStream<MaybeTlsStream<TcpStream>>;
// Connect to the agent and return the WS stream
pub async fn connect(url: &str) -> Result<WsStream, Box<dyn std::error::Error>> {
let (ws, _) = connect_async(url).await?;
Ok(ws)
}
// Send a "get_metrics" request and await a single JSON reply
pub async fn request_metrics(ws: &mut WsStream) -> Option<Metrics> {
if ws.send(Message::Text("get_metrics".into())).await.is_err() {
return None;
}
match ws.next().await {
Some(Ok(Message::Text(json))) => serde_json::from_str::<Metrics>(&json).ok(),
_ => None,
}
}
// Re-export SinkExt/StreamExt for call sites
use futures_util::{SinkExt, StreamExt};

75
socktop/tests/cli_args.rs Normal file
View File

@ -0,0 +1,75 @@
//! CLI arg parsing tests for socktop (client)
use std::process::Command;
// We test parsing by invoking the binary with --help and checking that the help text
// mentions both the short and long flags. The parse_args behavior is also exercised via a
// small reimplementation kept in sync with main (a compile-time test).
#[test]
fn test_help_mentions_short_and_long_flags() {
let output = Command::new(env!("CARGO_BIN_EXE_socktop"))
.arg("--help")
.output()
.expect("run socktop --help");
let text = format!(
"{}{}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr)
);
assert!(
text.contains("--tls-ca")
&& text.contains("-t")
&& text.contains("--profile")
&& text.contains("-P"),
"help text missing expected flags (--tls-ca/-t, --profile/-P)\n{text}"
);
}
#[test]
fn test_tls_ca_arg_long_and_short_parsed() {
// Use --help combined with flags to avoid network and still exercise arg acceptance
let exe = env!("CARGO_BIN_EXE_socktop");
// Long form with help
let out = Command::new(exe)
.args(["--tls-ca", "/tmp/cert.pem", "--help"])
.output()
.expect("run socktop");
assert!(
out.status.success(),
"socktop --tls-ca … --help did not succeed"
);
let text = format!(
"{}{}",
String::from_utf8_lossy(&out.stdout),
String::from_utf8_lossy(&out.stderr)
);
assert!(text.contains("Usage:"));
// Short form with help
let out2 = Command::new(exe)
.args(["-t", "/tmp/cert.pem", "--help"])
.output()
.expect("run socktop");
assert!(out2.status.success(), "socktop -t … --help did not succeed");
let text2 = format!(
"{}{}",
String::from_utf8_lossy(&out2.stdout),
String::from_utf8_lossy(&out2.stderr)
);
assert!(text2.contains("Usage:"));
// Profile flags with help (should not error)
let out3 = Command::new(exe)
.args(["--profile", "dev", "--help"])
.output()
.expect("run socktop");
assert!(
out3.status.success(),
"socktop --profile dev --help did not succeed"
);
let text3 = format!(
"{}{}",
String::from_utf8_lossy(&out3.stdout),
String::from_utf8_lossy(&out3.stderr)
);
assert!(text3.contains("Usage:"));
}

View File

@ -0,0 +1,46 @@
//! Tests for modal formatting and duration helper.
use std::time::Duration;
// format_duration is private to its module, so we duplicate its logic here and re-assert the
// expected behavior. If desired, it could be moved to a shared util module instead.
fn format_duration_ref(duration: Duration) -> String {
let total_secs = duration.as_secs();
let hours = total_secs / 3600;
let minutes = (total_secs % 3600) / 60;
let seconds = total_secs % 60;
if hours > 0 {
format!("{hours}h {minutes}m {seconds}s")
} else if minutes > 0 {
format!("{minutes}m {seconds}s")
} else {
format!("{seconds}s")
}
}
#[test]
fn test_format_duration_boundaries() {
assert_eq!(format_duration_ref(Duration::from_secs(0)), "0s");
assert_eq!(format_duration_ref(Duration::from_secs(59)), "59s");
assert_eq!(format_duration_ref(Duration::from_secs(60)), "1m 0s");
assert_eq!(format_duration_ref(Duration::from_secs(61)), "1m 1s");
assert_eq!(format_duration_ref(Duration::from_secs(3600)), "1h 0m 0s");
assert_eq!(format_duration_ref(Duration::from_secs(3661)), "1h 1m 1s");
}
// Basic test to ensure auto-retry countdown semantics are consistent for initial state.
#[test]
fn test_auto_retry_initial_none() {
// We can't construct App directly without pulling in the whole UI, so we mirror the logic here.
// For a more thorough test, refactor the countdown logic into a pure function.
// This placeholder asserts the desired initial semantics: with no disconnect/original time, the countdown should be None.
// (When integrated, consider exposing a pure helper returning Option<u64>.)
let modal_active = false; // requirement: must be active for countdown
let disconnected_state = true; // assume disconnected state
let countdown = if disconnected_state && modal_active {
// would compute target
Some(0)
} else {
None
};
assert!(countdown.is_none());
}

124
socktop/tests/profiles.rs Normal file
View File

@ -0,0 +1,124 @@
//! Tests for profile load/save and resolution logic (non-interactive paths only)
use std::fs;
use std::sync::Mutex;
// Global lock to serialize tests that mutate process-wide environment variables.
static ENV_LOCK: Mutex<()> = Mutex::new(());
#[allow(dead_code)] // touch crate
fn touch() {
let _ = socktop::types::Metrics {
cpu_total: 0.0,
cpu_per_core: vec![],
mem_total: 0,
mem_used: 0,
swap_total: 0,
swap_used: 0,
process_count: None,
hostname: String::new(),
cpu_temp_c: None,
disks: vec![],
networks: vec![],
top_processes: vec![],
gpus: None,
};
}
// profiles.rs isn't public, so rather than exposing its internals we exercise profile
// saving through CLI invocations.
use std::process::Command;
fn run_socktop(args: &[&str]) -> (bool, String) {
let exe = env!("CARGO_BIN_EXE_socktop");
let output = Command::new(exe).args(args).output().expect("run socktop");
let ok = output.status.success();
let text = format!(
"{}{}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr)
);
(ok, text)
}
fn config_dir() -> std::path::PathBuf {
if let Some(xdg) = std::env::var_os("XDG_CONFIG_HOME") {
std::path::PathBuf::from(xdg).join("socktop")
} else {
dirs_next::config_dir()
.unwrap_or_else(|| std::path::PathBuf::from("."))
.join("socktop")
}
}
fn profiles_path() -> std::path::PathBuf {
config_dir().join("profiles.json")
}
#[test]
fn test_profile_created_on_first_use() {
let _guard = ENV_LOCK.lock().unwrap();
// Isolate config in a temp dir
let td = tempfile::tempdir().unwrap();
unsafe {
std::env::set_var("XDG_CONFIG_HOME", td.path());
}
// Ensure directory exists fresh
std::fs::create_dir_all(td.path().join("socktop")).unwrap();
let _ = fs::remove_file(profiles_path());
// Provide profile + url => should create profiles.json
let (_ok, _out) = run_socktop(&["--profile", "unittest", "ws://example:1/ws", "--dry-run"]);
// --dry-run makes the client exit early after parsing (no network attempt)
let data = fs::read_to_string(profiles_path()).expect("profiles.json created");
assert!(
data.contains("unittest"),
"profiles.json missing profile entry: {data}"
);
}
#[test]
fn test_profile_overwrite_only_when_changed() {
let _guard = ENV_LOCK.lock().unwrap();
let td = tempfile::tempdir().unwrap();
unsafe {
std::env::set_var("XDG_CONFIG_HOME", td.path());
}
std::fs::create_dir_all(td.path().join("socktop")).unwrap();
let _ = fs::remove_file(profiles_path());
// Initial create
let (_ok, _out) = run_socktop(&["--profile", "prod", "ws://one/ws", "--dry-run"]); // create
let first = fs::read_to_string(profiles_path()).unwrap();
// Re-run identical (should not duplicate or corrupt)
let (_ok2, _out2) = run_socktop(&["--profile", "prod", "ws://one/ws", "--dry-run"]); // identical
let second = fs::read_to_string(profiles_path()).unwrap();
assert_eq!(
first, second,
"Profile file changed despite identical input"
);
// Overwrite with different URL using --save (no prompt path)
let (_ok3, _out3) = run_socktop(&["--profile", "prod", "--save", "ws://two/ws", "--dry-run"]);
let third = fs::read_to_string(profiles_path()).unwrap();
assert!(third.contains("two"), "Updated URL not written: {third}");
}
#[test]
fn test_profile_tls_ca_persisted() {
let _guard = ENV_LOCK.lock().unwrap();
let td = tempfile::tempdir().unwrap();
unsafe {
std::env::set_var("XDG_CONFIG_HOME", td.path());
}
std::fs::create_dir_all(td.path().join("socktop")).unwrap();
let _ = fs::remove_file(profiles_path());
let (_ok, _out) = run_socktop(&[
"--profile",
"secureX",
"--tls-ca",
"/tmp/cert.pem",
"wss://host/ws",
"--dry-run",
]);
let data = fs::read_to_string(profiles_path()).unwrap();
assert!(data.contains("secureX"));
assert!(data.contains("cert.pem"));
}

View File

@ -1,17 +1,47 @@
[package]
name = "socktop_agent"
version = "0.1.0"
version = "1.50.2"
authors = ["Jason Witty <jasonpwitty+socktop@proton.me>"]
description = "Remote system monitor over WebSocket, TUI like top"
edition = "2021"
description = "Socktop agent daemon. Serves host metrics over WebSocket."
edition = "2024"
license = "MIT"
readme = "README.md"
[dependencies]
tokio = { version = "1", features = ["full"] }
# Tokio: Use minimal features instead of "full" to reduce binary size
# Only include: rt-multi-thread (async runtime), net (WebSocket), sync (Mutex/RwLock), macros (#[tokio::test])
# Excluded: io, fs, process, signal, time (not needed for this workload)
# Savings: ~200-300KB binary size, faster compile times
tokio = { version = "1", features = ["rt-multi-thread", "net", "sync", "macros"] }
axum = { version = "0.7", features = ["ws", "macros"] }
sysinfo = "0.36.1"
sysinfo = { version = "0.37", features = ["network", "disk", "component"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
futures = "0.3"
flate2 = { version = "1", default-features = false, features = ["rust_backend"] }
futures-util = "0.3.31"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
tracing = { version = "0.1", optional = true }
tracing-subscriber = { version = "0.3", features = ["env-filter"], optional = true }
gfxinfo = "0.1.2"
once_cell = "1.19"
axum-server = { version = "0.7", features = ["tls-rustls"] }
rustls = { version = "0.23", features = ["aws-lc-rs"] }
rustls-pemfile = "2.1"
rcgen = "0.13"
anyhow = "1"
hostname = "0.3"
prost = { workspace = true }
time = { version = "0.3", default-features = false, features = ["formatting", "macros", "parsing" ] }
[features]
default = []
logging = ["tracing", "tracing-subscriber"]
[build-dependencies]
prost-build = "0.13"
tonic-build = { version = "0.12", default-features = false, optional = true }
protoc-bin-vendored = "3"
[dev-dependencies]
assert_cmd = "2.0"
tempfile = "3.10"
tokio-tungstenite = "0.21"

396
socktop_agent/README.md Normal file
View File

@ -0,0 +1,396 @@
# socktop_agent (server)
Lightweight on-demand metrics WebSocket server for the socktop TUI.
Highlights:
- Collects system metrics only when requested (keeps idle CPU <1%)
- Optional TLS (self-signed cert auto-generated & pinned by client)
- JSON for fast metrics / disks; protobuf (optionally gzipped) for processes
- Accurate per-process CPU% on Linux via /proc jiffies delta
- Optional GPU & temperature metrics (disable via env vars)
- Simple token auth (?token=...) support
Run (no TLS):
```
cargo install socktop_agent
socktop_agent --port 3000
```
Enable TLS:
```
SOCKTOP_ENABLE_SSL=1 socktop_agent --port 8443
# cert/key stored under $XDG_DATA_HOME/socktop_agent/tls
```
Environment toggles:
- SOCKTOP_AGENT_GPU=0 (disable GPU collection)
- SOCKTOP_AGENT_TEMP=0 (disable temperature)
- SOCKTOP_TOKEN=secret (require token param from client)
- SOCKTOP_AGENT_METRICS_TTL_MS=250 (cache fast metrics window)
- SOCKTOP_AGENT_PROCESSES_TTL_MS=1000
- SOCKTOP_AGENT_DISKS_TTL_MS=1000
*NOTE ON ENV vars*
These exist mainly for debugging. You do not need to configure them: the defaults are tuned, and GPU collection will disable itself after the first poll if no GPU is available.
Systemd unit example & full docs:
https://github.com/jasonwitty/socktop
## WebSocket API Integration Guide
The socktop_agent exposes a WebSocket API that can be directly integrated with your own applications. This allows you to build custom monitoring dashboards or analysis tools using the agent's metrics.
### WebSocket Endpoint
```
ws://HOST:PORT/ws # Without TLS
wss://HOST:PORT/ws # With TLS
```
With authentication token (if configured):
```
ws://HOST:PORT/ws?token=YOUR_TOKEN
wss://HOST:PORT/ws?token=YOUR_TOKEN
```
### Communication Protocol
All communication uses JSON format for requests and responses, except for the process list which uses Protocol Buffers (protobuf) format with optional gzip compression.
#### Request Types
Send a JSON message with a `type` field to request specific metrics:
```json
{"type": "metrics"} // Request fast-changing metrics (CPU, memory, network)
{"type": "disks"} // Request disk information
{"type": "processes"} // Request process list (returns protobuf)
```
#### Response Formats
1. **Fast Metrics** (JSON):
```json
{
"cpu_total": 12.4,
"cpu_per_core": [11.2, 15.7],
"mem_total": 33554432,
"mem_used": 18321408,
"swap_total": 0,
"swap_used": 0,
"hostname": "myserver",
"cpu_temp_c": 42.5,
"networks": [{"name":"eth0","received":12345678,"transmitted":87654321}],
"gpus": [{"name":"nvidia-0","usage":56.7,"memory_total":8589934592,"memory_used":1073741824,"temp_c":65.0}]
}
```
2. **Disks** (JSON):
```json
[
{"name":"nvme0n1p2","total":512000000000,"available":320000000000},
{"name":"sda1","total":1000000000000,"available":750000000000}
]
```
3. **Processes** (Protocol Buffers):
Processes are returned in Protocol Buffers format, optionally gzip-compressed for large process lists. The protobuf schema is:
```protobuf
syntax = "proto3";
package socktop;
message Process {
uint32 pid = 1;
string name = 2;
float cpu_usage = 3;
uint64 mem_bytes = 4;
}
message Processes {
uint64 process_count = 1; // total processes in the system
repeated Process rows = 2; // all processes
}
```
### Example Integration (JavaScript/Node.js)
```javascript
const WebSocket = require('ws');
// Connect to the agent
const ws = new WebSocket('ws://localhost:3000/ws');
ws.on('open', function open() {
console.log('Connected to socktop_agent');
// Request metrics immediately on connection
ws.send(JSON.stringify({type: 'metrics'}));
// Set up regular polling
setInterval(() => {
ws.send(JSON.stringify({type: 'metrics'}));
}, 1000);
// Request processes every 3 seconds
setInterval(() => {
ws.send(JSON.stringify({type: 'processes'}));
}, 3000);
});
ws.on('message', function incoming(data) {
// Check if the response is JSON or binary (protobuf)
try {
const jsonData = JSON.parse(data);
console.log('Received JSON data:', jsonData);
} catch (e) {
console.log('Received binary data (protobuf), length:', data.length);
// Process binary protobuf data with a library like protobufjs
}
});
ws.on('close', function close() {
console.log('Disconnected from socktop_agent');
});
```
### Example Integration (Python)
```python
import json
import asyncio
import websockets
async def monitor_system():
uri = "ws://localhost:3000/ws"
async with websockets.connect(uri) as websocket:
print("Connected to socktop_agent")
# Request initial metrics
await websocket.send(json.dumps({"type": "metrics"}))
# Set up regular polling
while True:
# Request metrics
await websocket.send(json.dumps({"type": "metrics"}))
# Receive and process response
response = await websocket.recv()
# Check if response is JSON or binary (protobuf)
try:
data = json.loads(response)
print(f"CPU: {data['cpu_total']}%, Memory: {data['mem_used']/data['mem_total']*100:.1f}%")
except json.JSONDecodeError:
print(f"Received binary data, length: {len(response)}")
# Process binary protobuf data with a library like protobuf
# Wait before next poll
await asyncio.sleep(1)
asyncio.run(monitor_system())
```
### Notes for Integration
1. **Error Handling**: The WebSocket connection may close unexpectedly; implement reconnection logic in your client.
2. **Rate Limiting**: Avoid excessive polling that could impact the system being monitored. Recommended intervals:
- Metrics: 500ms or slower
- Processes: 2000ms or slower
- Disks: 5000ms or slower
3. **Authentication**: If the agent is configured with a token, always include it in the WebSocket URL.
4. **Protocol Buffers Handling**: For processing the binary process list data, use a Protocol Buffers library for your language and the schema provided in the `proto/processes.proto` file.
5. **Compression**: Process lists may be gzip-compressed. Check if the response starts with the gzip magic bytes (`0x1f, 0x8b`) and decompress if necessary.
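The gzip check in note 5 can be sketched directly in Rust (the agent's own language). The `is_gzip` helper below is illustrative, not part of the codebase; actual decompression would use a crate such as flate2, which the agent already depends on:

```rust
// A gzip stream always begins with the two magic bytes 0x1f 0x8b, so a client
// can distinguish a compressed process-list frame from a raw protobuf frame
// by inspecting the first bytes of the binary WebSocket message.
fn is_gzip(frame: &[u8]) -> bool {
    frame.len() >= 2 && frame[0] == 0x1f && frame[1] == 0x8b
}

fn main() {
    // First bytes of a real gzip stream: magic, then the deflate method byte (0x08).
    assert!(is_gzip(&[0x1f, 0x8b, 0x08, 0x00]));
    // A raw protobuf frame typically starts with a field tag, not the magic bytes.
    assert!(!is_gzip(&[0x0a, 0x12]));
    assert!(!is_gzip(&[]));
    println!("ok");
}
```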
## LLM Integration Guide
If you're using an LLM to generate code for integrating with socktop_agent, this section provides structured information to help the model understand the API better.
### API Schema
```yaml
# WebSocket API Schema for socktop_agent
endpoint: ws://HOST:PORT/ws or wss://HOST:PORT/ws (with TLS)
authentication:
type: query parameter
parameter: token
example: ws://HOST:PORT/ws?token=YOUR_TOKEN
requests:
- type: metrics
format: JSON
example: {"type": "metrics"}
description: Fast-changing metrics (CPU, memory, network)
- type: disks
format: JSON
example: {"type": "disks"}
description: Disk information
- type: processes
format: JSON
example: {"type": "processes"}
description: Process list (returns protobuf)
responses:
- request_type: metrics
format: JSON
schema:
cpu_total: float # percentage of total CPU usage
cpu_per_core: [float] # array of per-core CPU usage percentages
mem_total: uint64 # total memory in bytes
mem_used: uint64 # used memory in bytes
swap_total: uint64 # total swap in bytes
swap_used: uint64 # used swap in bytes
hostname: string # system hostname
cpu_temp_c: float? # CPU temperature in Celsius (optional)
networks: [
{
name: string # network interface name
received: uint64 # total bytes received
transmitted: uint64 # total bytes transmitted
}
]
gpus: [
{
name: string # GPU device name
usage: float # GPU usage percentage
memory_total: uint64 # total GPU memory in bytes
memory_used: uint64 # used GPU memory in bytes
temp_c: float # GPU temperature in Celsius
}
]?
- request_type: disks
format: JSON
schema:
[
{
name: string # disk name
total: uint64 # total space in bytes
available: uint64 # available space in bytes
}
]
- request_type: processes
format: Protocol Buffers (optionally gzip-compressed)
schema: See protobuf definition below
```
### Protobuf Schema (processes.proto)
```protobuf
syntax = "proto3";
package socktop;
message Process {
uint32 pid = 1;
string name = 2;
float cpu_usage = 3;
uint64 mem_bytes = 4;
}
message Processes {
uint64 process_count = 1; // total processes in the system
repeated Process rows = 2; // all processes
}
```
### Step-by-Step Integration Pseudocode
```
1. Establish WebSocket connection to ws://HOST:PORT/ws
- Add token if required: ws://HOST:PORT/ws?token=YOUR_TOKEN
2. For regular metrics updates:
- Send: {"type": "metrics"}
- Parse JSON response
- Extract CPU, memory, network info
3. For disk information:
- Send: {"type": "disks"}
- Parse JSON response
- Extract disk usage data
4. For process list:
- Send: {"type": "processes"}
- Check if response is binary
- If starts with 0x1f, 0x8b bytes:
- Decompress using gzip
- Parse binary data using protobuf schema
- Extract process information
5. Implement reconnection logic:
- On connection close/error
- Use exponential backoff
6. Respect rate limits:
- metrics: ≥ 500ms interval
- disks: ≥ 5000ms interval
- processes: ≥ 2000ms interval
```
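Step 5's exponential backoff can be sketched as a pure delay schedule (an illustrative helper in Rust, not part of the codebase):

```rust
// Computes the reconnect delay for a given attempt number: the base delay
// doubles per failed attempt and is capped so a long outage never pushes
// retries out indefinitely. The shift is clamped to avoid u64 overflow.
fn backoff_secs(attempt: u32, base: u64, cap: u64) -> u64 {
    base.saturating_mul(1u64 << attempt.min(16)).min(cap)
}

fn main() {
    let delays: Vec<u64> = (0..6).map(|a| backoff_secs(a, 1, 30)).collect();
    assert_eq!(delays, vec![1, 2, 4, 8, 16, 30]); // doubles, then caps at 30s
    println!("ok");
}
```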
### Common Implementation Patterns
**Pattern 1: Periodic Polling**
```javascript
// Set up separate timers for different metric types
const metricsInterval = setInterval(() => ws.send(JSON.stringify({type: 'metrics'})), 500);
const disksInterval = setInterval(() => ws.send(JSON.stringify({type: 'disks'})), 5000);
const processesInterval = setInterval(() => ws.send(JSON.stringify({type: 'processes'})), 2000);
// Clean up on disconnect
ws.on('close', () => {
clearInterval(metricsInterval);
clearInterval(disksInterval);
clearInterval(processesInterval);
});
```
**Pattern 2: Processing Binary Protobuf Data**
```javascript
// Using protobufjs
const root = protobuf.loadSync('processes.proto');
const Processes = root.lookupType('socktop.Processes');
ws.on('message', function(data) {
if (typeof data !== 'string') {
// Check for gzip compression (magic bytes 0x1f 0x8b)
if (data[0] === 0x1f && data[1] === 0x8b) {
data = gunzipSync(data); // Use an appropriate decompression library
}
// Decode protobuf
const processes = Processes.decode(new Uint8Array(data));
console.log(`Total processes: ${processes.process_count}`);
processes.rows.forEach(p => {
console.log(`PID: ${p.pid}, Name: ${p.name}, CPU: ${p.cpu_usage}%`);
});
}
});
```
**Pattern 3: Reconnection Logic**
```javascript
function connect() {
const ws = new WebSocket('ws://localhost:3000/ws');
ws.on('open', () => {
console.log('Connected');
// Start polling
});
ws.on('close', () => {
console.log('Connection lost, reconnecting...');
setTimeout(connect, 1000); // Reconnect after 1 second
});
// Handle other events...
}
connect();
```

14
socktop_agent/build.rs Normal file
View File

@ -0,0 +1,14 @@
fn main() {
// Vendored protoc for reproducible builds
let protoc = protoc_bin_vendored::protoc_bin_path().expect("protoc");
println!("cargo:rerun-if-changed=proto/processes.proto");
// Compile protobuf definitions for processes
let mut cfg = prost_build::Config::new();
cfg.out_dir(std::env::var("OUT_DIR").unwrap());
cfg.protoc_executable(protoc); // Use the vendored protoc directly
// Use local path (ensures file is inside published crate tarball)
cfg.compile_protos(&["proto/processes.proto"], &["proto"]) // relative to CARGO_MANIFEST_DIR
.expect("compile protos");
}

View File

@ -0,0 +1,15 @@
syntax = "proto3";
package socktop;
// All running processes. Sorting is done client-side.
message Processes {
uint64 process_count = 1; // total processes in the system
repeated Process rows = 2; // all processes
}
message Process {
uint32 pid = 1;
string name = 2;
float cpu_usage = 3; // 0..100
uint64 mem_bytes = 4; // RSS bytes
}

View File

@ -0,0 +1,95 @@
//! Caching for process metrics and journal entries
use std::collections::HashMap;
use std::time::{Duration, Instant};
use tokio::sync::RwLock;
use crate::types::{ProcessMetricsResponse, JournalResponse};
#[derive(Debug, Clone)]
struct CacheEntry<T> {
data: T,
cached_at: Instant,
ttl: Duration,
}
impl<T> CacheEntry<T> {
fn is_expired(&self) -> bool {
self.cached_at.elapsed() > self.ttl
}
}
#[derive(Debug)]
pub struct ProcessCache {
process_metrics: RwLock<HashMap<u32, CacheEntry<ProcessMetricsResponse>>>,
journal_entries: RwLock<HashMap<u32, CacheEntry<JournalResponse>>>,
}
impl ProcessCache {
pub fn new() -> Self {
Self {
process_metrics: RwLock::new(HashMap::new()),
journal_entries: RwLock::new(HashMap::new()),
}
}
/// Get cached process metrics if available and not expired (250ms TTL)
pub async fn get_process_metrics(&self, pid: u32) -> Option<ProcessMetricsResponse> {
let cache = self.process_metrics.read().await;
if let Some(entry) = cache.get(&pid) {
if !entry.is_expired() {
return Some(entry.data.clone());
}
}
None
}
/// Cache process metrics with 250ms TTL
pub async fn set_process_metrics(&self, pid: u32, data: ProcessMetricsResponse) {
let mut cache = self.process_metrics.write().await;
cache.insert(pid, CacheEntry {
data,
cached_at: Instant::now(),
ttl: Duration::from_millis(250),
});
}
/// Get cached journal entries if available and not expired (1s TTL)
pub async fn get_journal_entries(&self, pid: u32) -> Option<JournalResponse> {
let cache = self.journal_entries.read().await;
if let Some(entry) = cache.get(&pid) {
if !entry.is_expired() {
return Some(entry.data.clone());
}
}
None
}
/// Cache journal entries with 1s TTL
pub async fn set_journal_entries(&self, pid: u32, data: JournalResponse) {
let mut cache = self.journal_entries.write().await;
cache.insert(pid, CacheEntry {
data,
cached_at: Instant::now(),
ttl: Duration::from_secs(1),
});
}
/// Clean up expired entries periodically
pub async fn cleanup_expired(&self) {
{
let mut cache = self.process_metrics.write().await;
cache.retain(|_, entry| !entry.is_expired());
}
{
let mut cache = self.journal_entries.write().await;
cache.retain(|_, entry| !entry.is_expired());
}
}
}
impl Default for ProcessCache {
fn default() -> Self {
Self::new()
}
}
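The `CacheEntry` expiry rule above boils down to an age check against the TTL. A minimal synchronous sketch of the same idea (illustrative only, without the async `RwLock` maps):

```rust
use std::time::{Duration, Instant};

// Single-entry version of the TTL logic used by ProcessCache above:
// a value is served only while its age stays within the TTL.
struct Entry<T> {
    data: T,
    cached_at: Instant,
    ttl: Duration,
}

impl<T: Clone> Entry<T> {
    // Returns the cached value if it hasn't expired, mirroring
    // get_process_metrics / get_journal_entries above.
    fn get(&self) -> Option<T> {
        (self.cached_at.elapsed() <= self.ttl).then(|| self.data.clone())
    }
}

fn main() {
    let fresh = Entry { data: 42u32, cached_at: Instant::now(), ttl: Duration::from_secs(60) };
    assert_eq!(fresh.get(), Some(42)); // well within its 60s TTL

    let stale = Entry { data: 7u32, cached_at: Instant::now(), ttl: Duration::from_millis(1) };
    std::thread::sleep(Duration::from_millis(10));
    assert_eq!(stale.get(), None); // aged past its TTL
    println!("ok");
}
```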

24
socktop_agent/src/gpu.rs Normal file
View File

@ -0,0 +1,24 @@
// gpu.rs
use gfxinfo::active_gpu;
#[derive(Debug, Clone, serde::Serialize)]
pub struct GpuMetrics {
pub name: String,
pub utilization_gpu_pct: u32, // 0..100
pub mem_used_bytes: u64,
pub mem_total_bytes: u64,
}
pub fn collect_all_gpus() -> Result<Vec<GpuMetrics>, Box<dyn std::error::Error>> {
let gpu = active_gpu()?; // Use ? to unwrap Result
let info = gpu.info();
let metrics = GpuMetrics {
name: gpu.model().to_string(),
utilization_gpu_pct: info.load_pct() as u32,
mem_used_bytes: info.used_vram(),
mem_total_bytes: info.total_vram(),
};
Ok(vec![metrics])
}

17
socktop_agent/src/lib.rs Normal file
View File

@ -0,0 +1,17 @@
//! Library interface for socktop_agent functionality
//! This allows testing of agent functions.
pub mod gpu;
pub mod metrics;
pub mod proto;
pub mod state;
pub mod tls;
pub mod types;
pub mod ws;
// Re-export commonly used types and functions for testing
pub use metrics::{collect_journal_entries, collect_process_metrics};
pub use state::{AppState, CacheEntry};
pub use types::{
DetailedProcessInfo, JournalEntry, JournalResponse, LogLevel, ProcessMetricsResponse,
};

View File

@ -1,136 +1,134 @@
//! socktop agent entrypoint: sets up sysinfo handles, launches a sampler,
//! and serves a WebSocket endpoint at /ws.
//! socktop agent entrypoint: sets up sysinfo handles and serves a WebSocket endpoint at /ws.
mod gpu;
mod metrics;
mod sampler;
mod proto;
// sampler module removed (metrics now purely request-driven)
mod state;
mod ws;
mod types;
mod ws;
use axum::{routing::get, Router};
use std::{collections::HashMap, net::SocketAddr, sync::Arc, time::Duration, sync::atomic::AtomicUsize};
use sysinfo::{
Components, CpuRefreshKind, Disks, MemoryRefreshKind, Networks, ProcessRefreshKind, RefreshKind,
System,
};
use tokio::sync::{Mutex, RwLock, Notify};
use tracing_subscriber::EnvFilter;
use axum::{Router, http::StatusCode, routing::get};
use std::net::SocketAddr;
use std::str::FromStr;
use state::{AppState, SharedTotals};
use sampler::spawn_sampler;
use ws::ws_handler;
mod tls;
#[tokio::main]
async fn main() {
use state::AppState;
// Init logging; configure with RUST_LOG (e.g., RUST_LOG=info).
tracing_subscriber::fmt()
.with_env_filter(EnvFilter::from_default_env())
.with_target(false)
.compact()
.init();
fn arg_flag(name: &str) -> bool {
std::env::args().any(|a| a == name)
}
fn arg_value(name: &str) -> Option<String> {
let mut it = std::env::args();
while let Some(a) = it.next() {
if a == name {
return it.next();
}
}
None
}
// sysinfo build specifics (scopes what refresh_all() will touch internally)
let refresh_kind = RefreshKind::nothing()
.with_cpu(CpuRefreshKind::everything())
.with_memory(MemoryRefreshKind::everything())
.with_processes(ProcessRefreshKind::everything());
fn main() -> anyhow::Result<()> {
// Install rustls crypto provider before any TLS operations
// This is required when using axum-server's tls-rustls feature
rustls::crypto::aws_lc_rs::default_provider()
.install_default()
.ok(); // Ignore error if already installed
// Initialize sysinfo handles once and keep them alive
let mut sys = System::new_with_specifics(refresh_kind);
sys.refresh_all();
#[cfg(feature = "logging")]
tracing_subscriber::fmt::init();
let mut nets = Networks::new();
nets.refresh(true);
// Configure Tokio runtime with optimized thread pool for reduced overhead.
//
// The agent is primarily I/O-bound (WebSocket, /proc file reads, sysinfo)
// with no CPU-intensive or blocking operations, so a smaller thread pool
// is beneficial:
//
// Benefits:
// - Lower memory footprint (~1-2MB per thread saved)
// - Reduced context switching overhead
// - Fewer idle threads consuming resources
// - Better for resource-constrained systems
//
// Trade-offs:
// - Slightly reduced throughput under very high concurrent connections
// - Could introduce latency if blocking operations are added (don't do this!)
//
// Default: 2 threads (sufficient for typical workloads with 1-10 clients)
// Override: Set SOCKTOP_WORKER_THREADS=4 to use more threads if needed
//
// Note: Default Tokio uses num_cpus threads which is excessive for this workload.
let mut components = Components::new();
components.refresh(true);
let worker_threads = std::env::var("SOCKTOP_WORKER_THREADS")
.ok()
.and_then(|s| s.parse::<usize>().ok())
.unwrap_or(2)
.clamp(1, 16); // Ensure 1-16 threads
let mut disks = Disks::new();
disks.refresh(true);
let runtime = tokio::runtime::Builder::new_multi_thread()
.worker_threads(worker_threads)
.thread_name("socktop-agent")
.enable_all()
.build()?;
// Shared state across requests
let state = AppState {
sys: Arc::new(Mutex::new(sys)),
nets: Arc::new(Mutex::new(nets)),
net_totals: Arc::new(Mutex::new(HashMap::<String, (u64, u64)>::new())) as SharedTotals,
components: Arc::new(Mutex::new(components)),
disks: Arc::new(Mutex::new(disks)),
last_json: Arc::new(RwLock::new(String::new())),
// new: adaptive sampling controls
client_count: Arc::new(AtomicUsize::new(0)),
wake_sampler: Arc::new(Notify::new()),
auth_token: std::env::var("SOCKTOP_TOKEN").ok().filter(|s| !s.is_empty()),
};
runtime.block_on(async_main())
}
// Start background sampler (adjust cadence as needed)
let _sampler = spawn_sampler(state.clone(), Duration::from_millis(500));
async fn async_main() -> anyhow::Result<()> {
// Version flag (print and exit). Keep before heavy initialization.
if arg_flag("--version") || arg_flag("-V") {
println!("socktop_agent {}", env!("CARGO_PKG_VERSION"));
return Ok(());
}
// Web app
let port = resolve_port();
let app = Router::new().route("/ws", get(ws_handler)).with_state(state);
let state = AppState::new();
// No background samplers: metrics collected on-demand per websocket request.
// Web app: route /ws to the websocket handler
async fn healthz() -> StatusCode {
println!("/healthz request");
StatusCode::OK
}
let app = Router::new()
.route("/ws", get(ws::ws_handler))
.route("/healthz", get(healthz))
.with_state(state.clone());
let enable_ssl =
arg_flag("--enableSSL") || std::env::var("SOCKTOP_ENABLE_SSL").ok().as_deref() == Some("1");
if enable_ssl {
// Port can be overridden by --port or SOCKTOP_PORT; default to 8443 when SSL
let port = arg_value("--port")
.or_else(|| arg_value("-p"))
.or_else(|| std::env::var("SOCKTOP_PORT").ok())
.and_then(|s| s.parse::<u16>().ok())
.unwrap_or(8443);
let (cert_path, key_path) = tls::ensure_self_signed_cert()?;
let cfg = axum_server::tls_rustls::RustlsConfig::from_pem_file(cert_path, key_path).await?;
let addr = SocketAddr::from_str(&format!("0.0.0.0:{port}"))?;
println!("socktop_agent: TLS enabled. Listening on wss://{addr}/ws");
axum_server::bind_rustls(addr, cfg)
.serve(app.into_make_service())
.await?;
return Ok(());
}
// Non-TLS HTTP/WS path
let port = arg_value("--port")
.or_else(|| arg_value("-p"))
.or_else(|| std::env::var("SOCKTOP_PORT").ok())
.and_then(|s| s.parse::<u16>().ok())
.unwrap_or(3000);
let addr = SocketAddr::from(([0, 0, 0, 0], port));
//output to console
println!("Remote agent running at http://{}", addr);
println!("WebSocket endpoint: ws://{}/ws", addr);
//trace logging
tracing::info!("Remote agent running at http://{} (ws at /ws)", addr);
tracing::info!("WebSocket endpoint: ws://{}/ws", addr);
let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
axum::serve(listener, app).await.unwrap();
}
// Resolve the listening port from CLI args/env with a 3000 default.
// Supports: --port <PORT>, -p <PORT>, a bare numeric positional arg, or SOCKTOP_PORT.
fn resolve_port() -> u16 {
const DEFAULT: u16 = 3000;
// Env takes precedence over positional, but is overridden by explicit flags if present.
if let Ok(s) = std::env::var("SOCKTOP_PORT") {
if let Ok(p) = s.parse::<u16>() {
if p != 0 {
return p;
}
}
eprintln!("Warning: invalid SOCKTOP_PORT='{}'; using default {}", s, DEFAULT);
}
let mut args = std::env::args().skip(1);
while let Some(arg) = args.next() {
match arg.as_str() {
"--port" | "-p" => {
if let Some(v) = args.next() {
match v.parse::<u16>() {
Ok(p) if p != 0 => return p,
_ => {
eprintln!("Invalid port '{}'; using default {}", v, DEFAULT);
return DEFAULT;
}
}
} else {
eprintln!("Missing value for {} ; using default {}", arg, DEFAULT);
return DEFAULT;
}
}
"--help" | "-h" => {
println!("Usage: socktop_agent [--port <PORT>] [PORT]\n SOCKTOP_PORT=<PORT> socktop_agent");
std::process::exit(0);
}
s => {
if let Ok(p) = s.parse::<u16>() {
if p != 0 {
return p;
}
}
}
}
}
DEFAULT
println!("socktop_agent: Listening on ws://{addr}/ws");
axum_server::bind(addr)
.serve(app.into_make_service())
.await?;
Ok(())
}
// Unit tests for CLI parsing moved to `tests/port_parse.rs`.
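The new entrypoint resolves the port with the precedence flag > `SOCKTOP_PORT` > default (3000 plain, 8443 with `--enableSSL`). A std-only sketch of that precedence, with arguments and environment passed in for testability (this `resolve_port` signature is illustrative, not the one in `main.rs`):

```rust
// Explicit flags win, then the environment value, then the default.
fn resolve_port(args: &[String], env_port: Option<&str>, default: u16) -> u16 {
    let flag_value = |name: &str| {
        args.iter()
            .position(|a| a.as_str() == name)
            .and_then(|i| args.get(i + 1))
            .map(|s| s.as_str())
    };
    flag_value("--port")
        .or_else(|| flag_value("-p"))
        .or(env_port)
        .and_then(|s| s.parse::<u16>().ok())
        .unwrap_or(default)
}

fn main() {
    let args: Vec<String> = vec!["-p".into(), "9555".into()];
    println!("{}", resolve_port(&args, Some("8080"), 3000)); // flag beats env
    println!("{}", resolve_port(&[], Some("8080"), 3000));   // env beats default
    println!("{}", resolve_port(&[], None, 8443));           // default under SSL
}
```

Unparseable values fall through to the default, matching the `and_then(parse).unwrap_or(...)` chain in the diff above.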

File diff suppressed because it is too large

@@ -1,32 +1,5 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Metrics {
pub ts_unix_ms: i64,
pub host: String,
pub uptime_secs: u64,
pub cpu_overall: f32,
pub cpu_per_core: Vec<f32>,
pub load_avg: (f64, f64, f64),
pub mem_total_mb: u64,
pub mem_used_mb: u64,
pub swap_total_mb: u64,
pub swap_used_mb: u64,
pub net_aggregate: NetTotals,
pub top_processes: Vec<Proc>,
// Generated protobuf modules live under OUT_DIR; include them here.
// This module will expose socktop::Processes and socktop::Process types.
pub mod pb {
include!(concat!(env!("OUT_DIR"), "/socktop.rs"));
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NetTotals {
pub rx_bytes: u64,
pub tx_bytes: u64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Proc {
pub pid: i32,
pub name: String,
pub cpu: f32,
pub mem_mb: u64,
pub status: String,
}

socktop_agent/src/sampler.rs

@@ -1,36 +0,0 @@
//! Background sampler: periodically collects metrics and updates a JSON cache,
//! so WS replies are just a read of the cached string.
use crate::metrics::collect_metrics;
use crate::state::AppState;
//use serde_json::to_string;
use tokio::task::JoinHandle;
use tokio::time::{Duration, interval, MissedTickBehavior};
pub fn spawn_sampler(state: AppState, period: Duration) -> JoinHandle<()> {
tokio::spawn(async move {
let idle_period = Duration::from_secs(10);
loop {
let active = state.client_count.load(std::sync::atomic::Ordering::Relaxed) > 0;
let mut ticker = interval(if active { period } else { idle_period });
ticker.set_missed_tick_behavior(MissedTickBehavior::Skip);
ticker.tick().await;
if !active {
tokio::select! {
_ = ticker.tick() => {},
_ = state.wake_sampler.notified() => continue,
}
}
if let Ok(json) = async {
let m = collect_metrics(&state).await;
serde_json::to_string(&m)
}
.await
{
*state.last_json.write().await = json;
}
}
})
}

socktop_agent/src/state.rs

@@ -1,30 +1,140 @@
//! Shared agent state: sysinfo handles and hot JSON cache.
use std::{collections::HashMap, sync::Arc};
use std::sync::atomic::AtomicUsize;
use std::collections::HashMap;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicUsize};
use std::time::{Duration, Instant};
use sysinfo::{Components, Disks, Networks, System};
use tokio::sync::{Mutex, RwLock, Notify};
use tokio::sync::Mutex;
pub type SharedSystem = Arc<Mutex<System>>;
pub type SharedNetworks = Arc<Mutex<Networks>>;
pub type SharedTotals = Arc<Mutex<HashMap<String, (u64, u64)>>>;
pub type SharedComponents = Arc<Mutex<Components>>;
pub type SharedDisks = Arc<Mutex<Disks>>;
pub type SharedNetworks = Arc<Mutex<Networks>>;
#[cfg(target_os = "linux")]
#[derive(Default)]
pub struct ProcCpuTracker {
pub last_total: u64,
pub last_per_pid: HashMap<u32, u64>,
}
#[cfg(not(target_os = "linux"))]
pub struct ProcessCache {
pub names: HashMap<u32, String>,
pub reusable_vec: Vec<crate::types::ProcessInfo>,
}
#[cfg(not(target_os = "linux"))]
impl Default for ProcessCache {
fn default() -> Self {
Self {
names: HashMap::with_capacity(1000), // Pre-allocate for typical modern system process count
reusable_vec: Vec::with_capacity(1000),
}
}
}
#[derive(Clone)]
pub struct AppState {
// Persistent sysinfo handles
pub sys: SharedSystem,
pub nets: SharedNetworks,
pub net_totals: SharedTotals, // iface -> (rx_total, tx_total)
pub components: SharedComponents,
pub disks: SharedDisks,
pub networks: SharedNetworks,
pub hostname: String,
// Last serialized JSON snapshot for fast WS responses
pub last_json: Arc<RwLock<String>>,
// For correct per-process CPU% using /proc deltas (Linux only path uses this tracker)
#[cfg(target_os = "linux")]
pub proc_cpu: Arc<Mutex<ProcCpuTracker>>,
// Adaptive sampling controls
// Process name caching and vector reuse for non-Linux to reduce allocations
#[cfg(not(target_os = "linux"))]
pub proc_cache: Arc<Mutex<ProcessCache>>,
// Connection tracking (to allow future idle sleeps if desired)
pub client_count: Arc<AtomicUsize>,
pub wake_sampler: Arc<Notify>,
pub auth_token: Option<String>,
}
// GPU negative cache (probe once). gpu_checked=true after first attempt; gpu_present reflects result.
pub gpu_checked: Arc<AtomicBool>,
pub gpu_present: Arc<AtomicBool>,
// Lightweight on-demand caches (TTL based) to cap CPU under bursty polling.
pub cache_metrics: Arc<Mutex<CacheEntry<crate::types::Metrics>>>,
pub cache_disks: Arc<Mutex<CacheEntry<Vec<crate::types::DiskInfo>>>>,
pub cache_processes: Arc<Mutex<CacheEntry<crate::types::ProcessesPayload>>>,
// Process detail caches (per-PID)
pub cache_process_metrics:
Arc<Mutex<HashMap<u32, CacheEntry<crate::types::ProcessMetricsResponse>>>>,
pub cache_journal_entries: Arc<Mutex<HashMap<u32, CacheEntry<crate::types::JournalResponse>>>>,
}
#[derive(Clone, Debug)]
pub struct CacheEntry<T> {
pub at: Option<Instant>,
pub value: Option<T>,
}
impl<T> Default for CacheEntry<T> {
fn default() -> Self {
Self::new()
}
}
impl<T> CacheEntry<T> {
pub fn new() -> Self {
Self {
at: None,
value: None,
}
}
pub fn is_fresh(&self, ttl: Duration) -> bool {
self.at.is_some_and(|t| t.elapsed() < ttl) && self.value.is_some()
}
pub fn set(&mut self, v: T) {
self.value = Some(v);
self.at = Some(Instant::now());
}
pub fn get(&self) -> Option<&T> {
self.value.as_ref()
}
}
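The `set`/`is_fresh` round trip above is small enough to verify in isolation. A minimal copy of `CacheEntry<T>` (same fields and logic as the struct above, reproduced here so the sketch is self-contained):

```rust
use std::time::{Duration, Instant};

struct CacheEntry<T> {
    at: Option<Instant>,
    value: Option<T>,
}

impl<T> CacheEntry<T> {
    fn new() -> Self {
        Self { at: None, value: None }
    }
    // Fresh only if a value was set and its timestamp is within the TTL.
    fn is_fresh(&self, ttl: Duration) -> bool {
        self.at.is_some_and(|t| t.elapsed() < ttl) && self.value.is_some()
    }
    fn set(&mut self, v: T) {
        self.value = Some(v);
        self.at = Some(Instant::now());
    }
}

fn main() {
    let ttl = Duration::from_millis(250);
    let mut e: CacheEntry<u32> = CacheEntry::new();
    assert!(!e.is_fresh(ttl)); // empty entry is never fresh
    e.set(42);
    assert!(e.is_fresh(ttl)); // just-set entry is fresh within its TTL
    std::thread::sleep(Duration::from_millis(300));
    assert!(!e.is_fresh(ttl)); // stale once the TTL elapses
    println!("cache entry TTL behaviour verified");
}
```

Storing `Option<Instant>` rather than a sentinel time keeps "never set" distinct from "set long ago".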
impl Default for AppState {
fn default() -> Self {
Self::new()
}
}
impl AppState {
pub fn new() -> Self {
let sys = System::new();
let components = Components::new_with_refreshed_list();
let disks = Disks::new_with_refreshed_list();
let networks = Networks::new_with_refreshed_list();
Self {
sys: Arc::new(Mutex::new(sys)),
components: Arc::new(Mutex::new(components)),
disks: Arc::new(Mutex::new(disks)),
networks: Arc::new(Mutex::new(networks)),
hostname: System::host_name().unwrap_or_else(|| "unknown".into()),
#[cfg(target_os = "linux")]
proc_cpu: Arc::new(Mutex::new(ProcCpuTracker::default())),
#[cfg(not(target_os = "linux"))]
proc_cache: Arc::new(Mutex::new(ProcessCache::default())),
client_count: Arc::new(AtomicUsize::new(0)),
auth_token: std::env::var("SOCKTOP_TOKEN")
.ok()
.filter(|s| !s.is_empty()),
gpu_checked: Arc::new(AtomicBool::new(false)),
gpu_present: Arc::new(AtomicBool::new(false)),
cache_metrics: Arc::new(Mutex::new(CacheEntry::new())),
cache_disks: Arc::new(Mutex::new(CacheEntry::new())),
cache_processes: Arc::new(Mutex::new(CacheEntry::new())),
cache_process_metrics: Arc::new(Mutex::new(HashMap::new())),
cache_journal_entries: Arc::new(Mutex::new(HashMap::new())),
}
}
}

socktop_agent/src/tls.rs Normal file

@@ -0,0 +1,91 @@
use rcgen::{CertificateParams, DistinguishedName, DnType, IsCa, SanType};
use std::{
fs,
io::Write,
net::{IpAddr, Ipv4Addr},
path::{Path, PathBuf},
};
use time::{Duration, OffsetDateTime};
fn config_dir() -> PathBuf {
std::env::var_os("XDG_CONFIG_HOME")
.map(PathBuf::from)
.or_else(|| std::env::var_os("HOME").map(|h| Path::new(&h).join(".config")))
.unwrap_or_else(|| PathBuf::from("."))
.join("socktop_agent")
.join("tls")
}
pub fn cert_paths() -> (PathBuf, PathBuf) {
let dir = config_dir();
(dir.join("cert.pem"), dir.join("key.pem"))
}
pub fn ensure_self_signed_cert() -> anyhow::Result<(PathBuf, PathBuf)> {
let (cert_path, key_path) = cert_paths();
if cert_path.exists() && key_path.exists() {
return Ok((cert_path, key_path));
}
fs::create_dir_all(cert_path.parent().unwrap())?;
let hostname = hostname::get()
.ok()
.and_then(|s| s.into_string().ok())
.unwrap_or_else(|| "localhost".to_string());
let mut params = CertificateParams::new(vec![hostname.clone(), "localhost".into()])?;
params
.subject_alt_names
.push(SanType::IpAddress(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1))));
params.subject_alt_names.push(SanType::IpAddress(IpAddr::V6(
::std::net::Ipv6Addr::LOCALHOST,
)));
params
.subject_alt_names
.push(SanType::IpAddress(IpAddr::V4(Ipv4Addr::UNSPECIFIED)));
// Allow operator to provide extra SANs (comma-separated), e.g. IPs or DNS names
if let Ok(extra) = std::env::var("SOCKTOP_AGENT_EXTRA_SANS") {
for raw in extra.split(',') {
let s = raw.trim();
if s.is_empty() {
continue;
}
if let Ok(ip) = s.parse::<IpAddr>() {
params.subject_alt_names.push(SanType::IpAddress(ip));
} else {
match s.to_string().try_into() {
Ok(dns) => params.subject_alt_names.push(SanType::DnsName(dns)),
Err(_) => eprintln!("socktop_agent: ignoring invalid SAN entry: {s}"),
}
}
}
}
let mut dn = DistinguishedName::new();
dn.push(DnType::CommonName, hostname.clone());
params.distinguished_name = dn;
params.is_ca = IsCa::NoCa;
// Dynamic validity: start slightly in the past to avoid clock skew issues, end ~397 days later
let now = OffsetDateTime::now_utc();
params.not_before = now - Duration::minutes(5);
params.not_after = now + Duration::days(397);
// Generate key pair (default is ECDSA P256 SHA256)
let key_pair = rcgen::KeyPair::generate()?;
let cert = params.self_signed(&key_pair)?;
let cert_pem = cert.pem();
let key_pem = key_pair.serialize_pem();
let mut f = fs::File::create(&cert_path)?;
f.write_all(cert_pem.as_bytes())?;
let mut k = fs::File::create(&key_path)?;
k.write_all(key_pem.as_bytes())?;
println!(
"socktop_agent: generated self-signed TLS certificate at {}",
cert_path.display()
);
println!("socktop_agent: private key at {}", key_path.display());
Ok((cert_path, key_path))
}
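The `SOCKTOP_AGENT_EXTRA_SANS` handling above splits a comma-separated list and classifies each entry as an IP or DNS SAN. A std-only sketch of that classification (the local `San` enum stands in for rcgen's `SanType`, and unlike the real code this sketch skips rcgen's DNS-name validation):

```rust
use std::net::IpAddr;

#[derive(Debug, PartialEq)]
enum San {
    Ip(IpAddr),
    Dns(String),
}

// Trim each entry, skip empties, and try IP parsing before falling back to DNS.
fn parse_extra_sans(raw: &str) -> Vec<San> {
    raw.split(',')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(|s| match s.parse::<IpAddr>() {
            Ok(ip) => San::Ip(ip),
            Err(_) => San::Dns(s.to_string()),
        })
        .collect()
}

fn main() {
    // "agent.internal" is a hypothetical hostname for illustration.
    let sans = parse_extra_sans("10.0.0.5, agent.internal,, ::1");
    println!("{sans:?}");
}
```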

socktop_agent/src/types.rs

@@ -1,9 +1,26 @@
//! Data types sent to the client over WebSocket.
//! Keep this module minimal and stable — it defines the wire format.
use crate::gpu::GpuMetrics;
use serde::Serialize;
#[derive(Debug, Serialize, Clone)]
#[derive(Debug, Clone, Serialize)]
pub struct DiskInfo {
pub name: String,
pub total: u64,
pub available: u64,
pub temperature: Option<f32>,
pub is_partition: bool,
}
#[derive(Debug, Clone, Serialize)]
pub struct NetworkInfo {
pub name: String,
pub received: u64,
pub transmitted: u64,
}
#[derive(Debug, Clone, Serialize)]
pub struct ProcessInfo {
pub pid: u32,
pub name: String,
@@ -11,22 +28,7 @@ pub struct ProcessInfo {
pub mem_bytes: u64,
}
#[derive(Debug, Serialize, Clone)]
pub struct DiskInfo {
pub name: String,
pub total: u64,
pub available: u64,
}
#[derive(Debug, Serialize, Clone)]
pub struct NetworkInfo {
pub name: String,
// cumulative totals since the agent started (client should diff to get rates)
pub received: u64,
pub transmitted: u64,
}
#[derive(Debug, Serialize, Clone)]
#[derive(Debug, Clone, Serialize)]
pub struct Metrics {
pub cpu_total: f32,
pub cpu_per_core: Vec<f32>,
@@ -34,10 +36,89 @@ pub struct Metrics {
pub mem_used: u64,
pub swap_total: u64,
pub swap_used: u64,
pub process_count: usize,
pub hostname: String,
pub cpu_temp_c: Option<f32>,
pub disks: Vec<DiskInfo>,
pub networks: Vec<NetworkInfo>,
pub top_processes: Vec<ProcessInfo>,
}
pub gpus: Option<Vec<GpuMetrics>>,
}
#[derive(Debug, Clone, Serialize)]
pub struct ProcessesPayload {
pub process_count: usize,
pub top_processes: Vec<ProcessInfo>,
}
#[derive(Debug, Clone, Serialize)]
pub struct ThreadInfo {
pub tid: u32, // Thread ID
pub name: String, // Thread name (from /proc/{pid}/task/{tid}/comm)
pub cpu_time_user: u64, // User CPU time in microseconds
pub cpu_time_system: u64, // System CPU time in microseconds
pub status: String, // Thread status (Running, Sleeping, etc.)
}
#[derive(Debug, Clone, Serialize)]
pub struct DetailedProcessInfo {
pub pid: u32,
pub name: String,
pub command: String,
pub cpu_usage: f32,
pub mem_bytes: u64,
pub virtual_mem_bytes: u64,
pub shared_mem_bytes: Option<u64>,
pub thread_count: u32,
pub fd_count: Option<u32>,
pub status: String,
pub parent_pid: Option<u32>,
pub user_id: u32,
pub group_id: u32,
pub start_time: u64, // Unix timestamp
pub cpu_time_user: u64, // Microseconds
pub cpu_time_system: u64, // Microseconds
pub read_bytes: Option<u64>,
pub write_bytes: Option<u64>,
pub working_directory: Option<String>,
pub executable_path: Option<String>,
pub child_processes: Vec<DetailedProcessInfo>,
pub threads: Vec<ThreadInfo>,
}
#[derive(Debug, Clone, Serialize)]
pub struct ProcessMetricsResponse {
pub process: DetailedProcessInfo,
pub cached_at: u64, // Unix timestamp when this data was cached
}
#[derive(Debug, Clone, Serialize)]
pub struct JournalEntry {
pub timestamp: String, // ISO 8601 formatted timestamp
pub priority: LogLevel,
pub message: String,
pub unit: Option<String>, // systemd unit name
pub pid: Option<u32>,
pub comm: Option<String>, // process command name
pub uid: Option<u32>,
pub gid: Option<u32>,
}
#[derive(Debug, Clone, Serialize)]
pub enum LogLevel {
Emergency = 0,
Alert = 1,
Critical = 2,
Error = 3,
Warning = 4,
Notice = 5,
Info = 6,
Debug = 7,
}
#[derive(Debug, Clone, Serialize)]
pub struct JournalResponse {
pub entries: Vec<JournalEntry>,
pub total_count: u32,
pub truncated: bool,
pub cached_at: u64, // Unix timestamp when this data was cached
}
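The `LogLevel` discriminants above follow the eight syslog/journald priorities (0 = Emergency through 7 = Debug). A hedged sketch of mapping a raw priority byte onto that enum (`from_priority` is a hypothetical helper, not part of the agent):

```rust
#[derive(Debug, PartialEq)]
enum LogLevel {
    Emergency,
    Alert,
    Critical,
    Error,
    Warning,
    Notice,
    Info,
    Debug,
}

// Priorities outside 0..=7 are rejected rather than clamped.
fn from_priority(p: u8) -> Option<LogLevel> {
    use LogLevel::*;
    Some(match p {
        0 => Emergency,
        1 => Alert,
        2 => Critical,
        3 => Error,
        4 => Warning,
        5 => Notice,
        6 => Info,
        7 => Debug,
        _ => return None,
    })
}

fn main() {
    println!("{:?}", from_priority(3)); // journald priority 3 is Error
    println!("{:?}", from_priority(9)); // out of range
}
```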

socktop_agent/src/ws.rs

@@ -1,61 +1,199 @@
//! WebSocket upgrade and per-connection handler. Serves cached JSON quickly.
//! WebSocket upgrade and per-connection handler (request-driven).
use axum::{
extract::{
ws::{Message, WebSocket, WebSocketUpgrade},
Query, State,
},
http::StatusCode,
response::{IntoResponse, Response},
extract::ws::{Message, WebSocket},
extract::{Query, State, WebSocketUpgrade},
response::Response,
};
use futures_util::stream::StreamExt;
use flate2::{Compression, write::GzEncoder};
use futures_util::StreamExt;
use once_cell::sync::OnceCell;
use std::collections::HashMap;
use std::io::Write;
use tokio::sync::Mutex;
use crate::metrics::collect_metrics;
use crate::metrics::{collect_disks, collect_fast_metrics, collect_processes_all};
use crate::proto::pb;
use crate::state::AppState;
use std::collections::HashMap;
use std::sync::atomic::Ordering;
// Compression threshold in bytes: payloads at or below this size are sent
// as-is; larger payloads are gzipped before transmission.
const COMPRESSION_THRESHOLD: usize = 768;
// Reusable process-row buffer to avoid per-message allocations
struct CompressionCache {
processes_vec: Vec<pb::Process>,
}
impl CompressionCache {
fn new() -> Self {
Self {
processes_vec: Vec::with_capacity(512), // Typical process count
}
}
}
static COMPRESSION_CACHE: OnceCell<Mutex<CompressionCache>> = OnceCell::new();
pub async fn ws_handler(
ws: WebSocketUpgrade,
State(state): State<AppState>,
Query(q): Query<HashMap<String, String>>,
) -> Response {
if let Some(expected) = state.auth_token.as_ref() {
match q.get("token") {
Some(t) if t == expected => {}
_ => return StatusCode::UNAUTHORIZED.into_response(),
}
// optional auth
if let Some(expected) = state.auth_token.as_ref()
&& q.get("token") != Some(expected)
{
return ws.on_upgrade(|socket| async move {
let _ = socket.close().await;
});
}
ws.on_upgrade(move |socket| handle_socket(socket, state))
}
async fn handle_socket(mut socket: WebSocket, state: AppState) {
// Bump client count on connect and wake the sampler.
state.client_count.fetch_add(1, Ordering::Relaxed);
state.wake_sampler.notify_waiters();
// Ensure we decrement on disconnect (drop).
struct ClientGuard(AppState);
impl Drop for ClientGuard {
fn drop(&mut self) {
self.0.client_count.fetch_sub(1, Ordering::Relaxed);
self.0.wake_sampler.notify_waiters();
}
}
let _guard = ClientGuard(state.clone());
state
.client_count
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
while let Some(Ok(msg)) = socket.next().await {
match msg {
Message::Text(text) if text == "get_metrics" => {
// Serve the cached JSON quickly; if empty (cold start), collect once.
let cached = state.last_json.read().await.clone();
if !cached.is_empty() {
let _ = socket.send(Message::Text(cached)).await;
Message::Text(ref text) if text == "get_metrics" => {
let m = collect_fast_metrics(&state).await;
let _ = send_json(&mut socket, &m).await;
}
Message::Text(ref text) if text == "get_disks" => {
let d = collect_disks(&state).await;
let _ = send_json(&mut socket, &d).await;
}
Message::Text(ref text) if text == "get_processes" => {
let payload = collect_processes_all(&state).await;
// Map to protobuf message
// Get cached buffers
let cache = COMPRESSION_CACHE.get_or_init(|| Mutex::new(CompressionCache::new()));
let mut cache = cache.lock().await;
// Reuse process vector to build the list
cache.processes_vec.clear();
cache
.processes_vec
.extend(payload.top_processes.into_iter().map(|p| pb::Process {
pid: p.pid,
name: p.name,
cpu_usage: p.cpu_usage,
mem_bytes: p.mem_bytes,
}));
let pb = pb::Processes {
process_count: payload.process_count as u64,
rows: std::mem::take(&mut cache.processes_vec),
};
let mut buf = Vec::with_capacity(8 * 1024);
if prost::Message::encode(&pb, &mut buf).is_err() {
let _ = socket.send(Message::Close(None)).await;
} else {
let metrics = collect_metrics(&state).await;
if let Ok(js) = serde_json::to_string(&metrics) {
let _ = socket.send(Message::Text(js)).await;
// compress if large
if buf.len() <= COMPRESSION_THRESHOLD {
let _ = socket.send(Message::Binary(buf)).await;
} else {
// Create a new encoder for each message to ensure proper gzip headers
let mut encoder =
GzEncoder::new(Vec::with_capacity(buf.len()), Compression::fast());
match encoder.write_all(&buf).and_then(|_| encoder.finish()) {
Ok(compressed) => {
let _ = socket.send(Message::Binary(compressed)).await;
}
Err(_) => {
let _ = socket.send(Message::Binary(buf)).await;
}
}
}
}
drop(cache); // Explicit drop to release mutex early
}
Message::Text(ref text) if text.starts_with("get_process_metrics:") => {
if let Some(pid_str) = text.strip_prefix("get_process_metrics:")
&& let Ok(pid) = pid_str.parse::<u32>()
{
let ttl = std::time::Duration::from_millis(250); // 250ms TTL
// Check cache first
{
let cache = state.cache_process_metrics.lock().await;
if let Some(entry) = cache.get(&pid)
&& entry.is_fresh(ttl)
&& let Some(cached_response) = entry.get()
{
let _ = send_json(&mut socket, cached_response).await;
continue;
}
}
// Collect fresh data
match crate::metrics::collect_process_metrics(pid, &state).await {
Ok(response) => {
// Cache the response
{
let mut cache = state.cache_process_metrics.lock().await;
cache
.entry(pid)
.or_insert_with(crate::state::CacheEntry::new)
.set(response.clone());
}
let _ = send_json(&mut socket, &response).await;
}
Err(err) => {
let error_response = serde_json::json!({
"error": err,
"request": "get_process_metrics",
"pid": pid
});
let _ = send_json(&mut socket, &error_response).await;
}
}
}
}
Message::Text(ref text) if text.starts_with("get_journal_entries:") => {
if let Some(pid_str) = text.strip_prefix("get_journal_entries:")
&& let Ok(pid) = pid_str.parse::<u32>()
{
let ttl = std::time::Duration::from_secs(1); // 1s TTL
// Check cache first
{
let cache = state.cache_journal_entries.lock().await;
if let Some(entry) = cache.get(&pid)
&& entry.is_fresh(ttl)
&& let Some(cached_response) = entry.get()
{
let _ = send_json(&mut socket, cached_response).await;
continue;
}
}
// Collect fresh data
match crate::metrics::collect_journal_entries(pid) {
Ok(response) => {
// Cache the response
{
let mut cache = state.cache_journal_entries.lock().await;
cache
.entry(pid)
.or_insert_with(crate::state::CacheEntry::new)
.set(response.clone());
}
let _ = send_json(&mut socket, &response).await;
}
Err(err) => {
let error_response = serde_json::json!({
"error": err,
"request": "get_journal_entries",
"pid": pid
});
let _ = send_json(&mut socket, &error_response).await;
}
}
}
}
@@ -63,4 +201,99 @@ async fn handle_socket(mut socket: WebSocket, state: AppState) {
_ => {}
}
}
}
state
.client_count
.fetch_sub(1, std::sync::atomic::Ordering::Relaxed);
}
// Small, cheap gzip for larger payloads; send text for small.
async fn send_json<T: serde::Serialize>(ws: &mut WebSocket, value: &T) -> Result<(), axum::Error> {
let json = serde_json::to_string(value).expect("serialize");
if json.len() <= COMPRESSION_THRESHOLD {
return ws.send(Message::Text(json)).await;
}
let mut enc = GzEncoder::new(Vec::new(), Compression::fast());
enc.write_all(json.as_bytes()).ok();
let bin = enc.finish().unwrap_or_else(|_| json.into_bytes());
ws.send(Message::Binary(bin)).await
}
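The handler above dispatches on plain-text commands, with per-PID requests encoded as `get_process_metrics:<pid>` and `get_journal_entries:<pid>`. A std-only sketch of that request parsing (the `Request` enum is illustrative; the real handler matches inline rather than building an enum):

```rust
#[derive(Debug, PartialEq)]
enum Request {
    Metrics,
    Disks,
    Processes,
    ProcessMetrics(u32),
    JournalEntries(u32),
    Unknown,
}

// Mirrors the ws.rs dispatch: exact matches first, then prefix + PID parsing.
fn parse_request(text: &str) -> Request {
    match text {
        "get_metrics" => Request::Metrics,
        "get_disks" => Request::Disks,
        "get_processes" => Request::Processes,
        t => {
            if let Some(pid) = t
                .strip_prefix("get_process_metrics:")
                .and_then(|p| p.parse().ok())
            {
                Request::ProcessMetrics(pid)
            } else if let Some(pid) = t
                .strip_prefix("get_journal_entries:")
                .and_then(|p| p.parse().ok())
            {
                Request::JournalEntries(pid)
            } else {
                Request::Unknown
            }
        }
    }
}

fn main() {
    println!("{:?}", parse_request("get_metrics"));
    println!("{:?}", parse_request("get_process_metrics:4242"));
    println!("{:?}", parse_request("get_journal_entries:nope")); // bad PID -> Unknown
}
```

Malformed PIDs fall through to `Unknown`, matching the real handler's behaviour of silently ignoring requests whose suffix fails `parse::<u32>()`.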
#[cfg(test)]
mod tests {
use super::*;
use prost::Message as ProstMessage;
use sysinfo::System;
#[tokio::test]
async fn test_process_list_not_empty() {
// Initialize system data first to ensure we have processes
let mut sys = System::new_all();
sys.refresh_all();
// Create state and put the refreshed system in it
let state = AppState::new();
{
let mut sys_lock = state.sys.lock().await;
*sys_lock = sys;
}
// Get processes directly using the collection function
let processes = collect_processes_all(&state).await;
// Convert to protobuf message format
let cache = COMPRESSION_CACHE.get_or_init(|| Mutex::new(CompressionCache::new()));
let mut cache = cache.lock().await;
// Reuse process vector to build the list
cache.processes_vec.clear();
cache
.processes_vec
.extend(processes.top_processes.into_iter().map(|p| pb::Process {
pid: p.pid,
name: p.name,
cpu_usage: p.cpu_usage,
mem_bytes: p.mem_bytes,
}));
// Create the protobuf message
let pb = pb::Processes {
process_count: processes.process_count as u64,
rows: cache.processes_vec.clone(),
};
// Test protobuf encoding/decoding
let mut buf = Vec::new();
prost::Message::encode(&pb, &mut buf).expect("Failed to encode protobuf");
let decoded = pb::Processes::decode(buf.as_slice()).expect("Failed to decode protobuf");
// Print debug info
println!("Process count: {}", pb.process_count);
println!("Process vector length: {}", pb.rows.len());
println!("Encoded size: {} bytes", buf.len());
println!("Decoded process count: {}", decoded.rows.len());
// Print first few processes if available
for (i, process) in pb.rows.iter().take(5).enumerate() {
println!(
"Process {}: {} (PID: {}) CPU: {:.1}% MEM: {} bytes",
i + 1,
process.name,
process.pid,
process.cpu_usage,
process.mem_bytes
);
}
// Validate
assert!(!pb.rows.is_empty(), "Process list should not be empty");
assert!(
pb.process_count > 0,
"Process count should be greater than 0"
);
assert_eq!(
pb.process_count as usize,
pb.rows.len(),
"Process count mismatch with actual rows"
);
}
}


@@ -0,0 +1,132 @@
//! Tests for the process cache functionality
use socktop_agent::state::{AppState, CacheEntry};
use socktop_agent::types::{DetailedProcessInfo, JournalResponse, ProcessMetricsResponse};
use std::time::Duration;
use tokio::time::sleep;
#[tokio::test]
async fn test_process_cache_ttl() {
let state = AppState::new();
let pid = 12345;
// Create mock data
let process_info = DetailedProcessInfo {
pid,
name: "test_process".to_string(),
command: "test command".to_string(),
cpu_usage: 50.0,
mem_bytes: 1024 * 1024,
virtual_mem_bytes: 2048 * 1024,
shared_mem_bytes: Some(512 * 1024),
thread_count: 4,
fd_count: Some(10),
status: "Running".to_string(),
parent_pid: Some(1),
user_id: 1000,
group_id: 1000,
start_time: 1234567890,
cpu_time_user: 100000,
cpu_time_system: 50000,
read_bytes: Some(1024),
write_bytes: Some(2048),
working_directory: Some("/tmp".to_string()),
executable_path: Some("/usr/bin/test".to_string()),
child_processes: vec![],
threads: vec![],
};
let metrics_response = ProcessMetricsResponse {
process: process_info,
cached_at: 1234567890,
};
let journal_response = JournalResponse {
entries: vec![],
total_count: 0,
truncated: false,
cached_at: 1234567890,
};
// Test process metrics caching
{
let mut cache = state.cache_process_metrics.lock().await;
cache
.entry(pid)
.or_insert_with(CacheEntry::new)
.set(metrics_response.clone());
}
// Should get cached value immediately
{
let cache = state.cache_process_metrics.lock().await;
let ttl = Duration::from_millis(250);
if let Some(entry) = cache.get(&pid) {
assert!(entry.is_fresh(ttl));
assert!(entry.get().is_some());
assert_eq!(entry.get().unwrap().process.pid, pid);
} else {
panic!("Expected cached entry");
}
}
println!("✓ Process metrics cached and retrieved successfully");
// Test journal entries caching
{
let mut cache = state.cache_journal_entries.lock().await;
cache
.entry(pid)
.or_insert_with(CacheEntry::new)
.set(journal_response.clone());
}
// Should get cached value immediately
{
let cache = state.cache_journal_entries.lock().await;
let ttl = Duration::from_secs(1);
if let Some(entry) = cache.get(&pid) {
assert!(entry.is_fresh(ttl));
assert!(entry.get().is_some());
assert_eq!(entry.get().unwrap().total_count, 0);
} else {
panic!("Expected cached entry");
}
}
println!("✓ Journal entries cached and retrieved successfully");
// Wait for process metrics to expire (250ms + buffer)
sleep(Duration::from_millis(300)).await;
// Process metrics should be expired now
{
let cache = state.cache_process_metrics.lock().await;
let ttl = Duration::from_millis(250);
if let Some(entry) = cache.get(&pid) {
assert!(!entry.is_fresh(ttl));
}
}
println!("✓ Process metrics correctly expired after TTL");
// Journal entries should still be valid (1s TTL)
{
let cache = state.cache_journal_entries.lock().await;
let ttl = Duration::from_secs(1);
if let Some(entry) = cache.get(&pid) {
assert!(entry.is_fresh(ttl));
}
}
println!("✓ Journal entries still valid within TTL");
// Wait for journal entries to expire (additional 800ms to reach 1s total)
sleep(Duration::from_millis(800)).await;
// Journal entries should be expired now
{
let cache = state.cache_journal_entries.lock().await;
let ttl = Duration::from_secs(1);
if let Some(entry) = cache.get(&pid) {
assert!(!entry.is_fresh(ttl));
}
}
println!("✓ Journal entries correctly expired after TTL");
}
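The assertions above exercise a TTL-style entry API (`new`, `set`, `get`, `is_fresh`). A minimal self-contained sketch of such an entry, assuming the agent's real `CacheEntry` differs in detail:

```rust
use std::time::{Duration, Instant};

// Minimal TTL cache entry (sketch only; the agent's CacheEntry may differ).
struct CacheEntry<T> {
    value: Option<T>,
    updated_at: Option<Instant>,
}

impl<T: Clone> CacheEntry<T> {
    fn new() -> Self {
        Self { value: None, updated_at: None }
    }

    // Store a value and stamp the time it was cached.
    fn set(&mut self, v: T) {
        self.value = Some(v);
        self.updated_at = Some(Instant::now());
    }

    fn get(&self) -> Option<T> {
        self.value.clone()
    }

    // Fresh only if a value exists and its age is within the TTL.
    fn is_fresh(&self, ttl: Duration) -> bool {
        self.updated_at.map_or(false, |t| t.elapsed() < ttl)
    }
}

fn main() {
    let mut entry = CacheEntry::new();
    assert!(!entry.is_fresh(Duration::from_millis(250)));
    entry.set(1234567890u64);
    assert!(entry.is_fresh(Duration::from_secs(60)));
    assert_eq!(entry.get(), Some(1234567890u64));
}
```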


@ -0,0 +1,28 @@
//! CLI arg parsing tests for socktop_agent (server)
use std::process::Command;
#[test]
fn test_help_and_port_short_long() {
// We verify port flags are accepted by ensuring the process starts (then we kill quickly).
// Use an unlikely port to avoid conflicts.
let exe = env!("CARGO_BIN_EXE_socktop_agent");
// TLS enabled with long --port
let mut child = Command::new(exe)
.args(["--enableSSL", "--port", "9555"])
.spawn()
.expect("spawn agent");
// Give it a moment to bind
std::thread::sleep(std::time::Duration::from_millis(150));
let _ = child.kill();
let _ = child.wait();
// TLS enabled with short -p
let mut child2 = Command::new(exe)
.args(["--enableSSL", "-p", "9556"])
.spawn()
.expect("spawn agent");
std::thread::sleep(std::time::Duration::from_millis(150));
let _ = child2.kill();
let _ = child2.wait();
}


@ -0,0 +1,40 @@
//! Unit test for port parsing logic moved out of `main.rs`.
fn parse_port<I: IntoIterator<Item = String>>(args: I, default_port: u16) -> u16 {
let mut it = args.into_iter();
let _ = it.next(); // program name
let mut long: Option<String> = None;
let mut short: Option<String> = None;
while let Some(a) = it.next() {
match a.as_str() {
"--port" => long = it.next(),
"-p" => short = it.next(),
_ if a.starts_with("--port=") => {
if let Some((_, v)) = a.split_once('=') {
long = Some(v.to_string());
}
}
_ => {}
}
}
long.or(short)
.and_then(|s| s.parse::<u16>().ok())
.unwrap_or(default_port)
}
#[test]
fn port_long_short_and_assign() {
assert_eq!(
parse_port(vec!["agent".into(), "--port".into(), "9001".into()], 8443),
9001
);
assert_eq!(
parse_port(vec!["agent".into(), "-p".into(), "9002".into()], 8443),
9002
);
assert_eq!(
parse_port(vec!["agent".into(), "--port=9003".into()], 8443),
9003
);
assert_eq!(parse_port(vec!["agent".into()], 8443), 8443);
}


@ -0,0 +1,89 @@
//! Tests for process detail collection functionality
use socktop_agent::metrics::{collect_journal_entries, collect_process_metrics};
use socktop_agent::state::AppState;
use std::process;
#[tokio::test]
async fn test_collect_process_metrics_self() {
// Test collecting metrics for our own process
let pid = process::id();
let state = AppState::new();
match collect_process_metrics(pid, &state).await {
Ok(response) => {
assert_eq!(response.process.pid, pid);
assert!(!response.process.name.is_empty());
// Command might be empty on some systems, so don't assert on it
assert!(response.cached_at > 0);
println!(
"✓ Process metrics collected for PID {}: {} ({})",
pid, response.process.name, response.process.command
);
}
Err(e) => {
// This might fail if sysinfo can't find the process, which is possible
println!("⚠ Warning: Failed to collect process metrics for self: {e}");
}
}
}
#[tokio::test]
async fn test_collect_journal_entries_self() {
// Test collecting journal entries for our own process
let pid = process::id();
match collect_journal_entries(pid) {
Ok(response) => {
assert!(response.cached_at > 0);
println!(
"✓ Journal entries collected for PID {}: {} entries",
pid, response.total_count
);
if !response.entries.is_empty() {
let entry = &response.entries[0];
println!(" Latest entry: {}", entry.message);
}
}
Err(e) => {
// This might fail if journalctl is not available or restricted
println!("⚠ Warning: Failed to collect journal entries for self: {e}");
}
}
}
#[tokio::test]
async fn test_collect_process_metrics_invalid_pid() {
// Test with an invalid PID
let invalid_pid = 999999;
let state = AppState::new();
match collect_process_metrics(invalid_pid, &state).await {
Ok(_) => {
println!("⚠ Warning: Unexpectedly found process for invalid PID {invalid_pid}");
}
Err(e) => {
println!("✓ Correctly failed for invalid PID {invalid_pid}: {e}");
assert!(e.contains("not found"));
}
}
}
#[tokio::test]
async fn test_collect_journal_entries_invalid_pid() {
// Test with an invalid PID - journalctl might still return empty results
let invalid_pid = 999999;
match collect_journal_entries(invalid_pid) {
Ok(response) => {
println!(
"✓ Journal query completed for invalid PID {} (empty result expected): {} entries",
invalid_pid, response.total_count
);
// Should be empty or very few entries
}
Err(e) => {
println!("✓ Journal query failed for invalid PID {invalid_pid}: {e}");
}
}
}


@ -0,0 +1,58 @@
use std::fs;
use std::path::PathBuf;
use std::process::Command;
use std::time::Duration;
use std::time::Instant;
fn expected_paths(config_home: &std::path::Path) -> (PathBuf, PathBuf) {
let base = config_home.join("socktop_agent").join("tls");
(base.join("cert.pem"), base.join("key.pem"))
}
#[test]
fn generates_self_signed_cert_and_key_in_xdg_path() {
// Create an isolated fake XDG_CONFIG_HOME
let tmpdir = tempfile::tempdir().expect("tempdir");
let xdg = tmpdir.path().to_path_buf();
// Run the agent once with --enableSSL, short timeout so it exits quickly when killed
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("socktop_agent"));
// Bind to an ephemeral port (-p 0) to avoid conflicts/flakes
cmd.env("XDG_CONFIG_HOME", &xdg)
.arg("--enableSSL")
.arg("-p")
.arg("0");
// Spawn the process and poll for cert generation
let mut child = cmd.spawn().expect("spawn agent");
// Poll up to ~3s for files to appear to avoid timing flakes
let (cert_path, key_path) = expected_paths(&xdg);
let start = Instant::now();
let timeout = Duration::from_millis(3000);
let interval = Duration::from_millis(50);
while start.elapsed() < timeout {
if cert_path.exists() && key_path.exists() {
break;
}
std::thread::sleep(interval);
}
// Terminate the process regardless
let _ = child.kill();
let _ = child.wait();
// Verify files exist at expected paths
assert!(
cert_path.exists(),
"cert not found at {}",
cert_path.display()
);
assert!(key_path.exists(), "key not found at {}", key_path.display());
// Also ensure they are non-empty
let cert_md = fs::metadata(&cert_path).expect("cert metadata");
let key_md = fs::metadata(&key_path).expect("key metadata");
assert!(cert_md.len() > 0, "cert is empty");
assert!(key_md.len() > 0, "key is empty");
}


@ -0,0 +1,60 @@
[package]
name = "socktop_connector"
version = "1.50.0"
edition = "2024"
license = "MIT"
description = "WebSocket connector library for socktop agent communication"
authors = ["Jason Witty <jasonpwitty+socktop@proton.me>"]
repository = "https://github.com/jasonwitty/socktop"
readme = "README.md"
keywords = ["monitoring", "websocket", "metrics", "system"]
categories = ["network-programming", "development-tools"]
documentation = "https://docs.rs/socktop_connector"
[lib]
crate-type = ["cdylib", "rlib"]
# docs.rs specific metadata
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
# WebSocket client - only for non-WASM targets
tokio-tungstenite = { workspace = true, optional = true }
tokio = { workspace = true, optional = true }
futures-util = { workspace = true, optional = true }
url = { workspace = true, optional = true }
# WASM WebSocket support
wasm-bindgen = { version = "0.2", optional = true }
wasm-bindgen-futures = { version = "0.4", optional = true }
js-sys = { version = "0.3", optional = true }
web-sys = { version = "0.3", features = ["WebSocket", "MessageEvent", "ErrorEvent", "CloseEvent", "BinaryType", "Window", "console"], optional = true }
# TLS support
rustls = { version = "0.23", features = ["ring"], optional = true }
rustls-pemfile = { version = "2.1", optional = true }
# Serialization - always available
serde = { workspace = true }
serde_json = { workspace = true }
# Compression - used in both networking and WASM modes
flate2 = "1.0"
# Protobuf - always available
prost = { workspace = true }
# Error handling - always available
thiserror = "2.0"
[build-dependencies]
prost-build = "0.13"
protoc-bin-vendored = "3.0"
[features]
default = ["networking", "tls"]
networking = ["tokio-tungstenite", "tokio", "futures-util", "url"]
tls = ["networking", "rustls", "rustls-pemfile"]
wasm = ["wasm-bindgen", "wasm-bindgen-futures", "js-sys", "web-sys"] # WASM-compatible networking with compression

socktop_connector/LICENSE

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Jason Witty
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

socktop_connector/README.md

@ -0,0 +1,486 @@
# socktop_connector
A WebSocket connector library for communicating with socktop agents.
## Overview
`socktop_connector` provides a high-level, type-safe interface for connecting to socktop agents over WebSocket connections. It handles connection management, TLS certificate pinning, compression, and protocol buffer decoding automatically.
The library is designed for professional use with structured error handling that allows you to pattern match on specific error types, making it easy to implement robust error recovery and monitoring strategies.
## Features
- **WebSocket Communication**: Support for both `ws://` and `wss://` connections
- **TLS Security**: Certificate pinning for secure connections with self-signed certificates
- **Hostname Verification**: Configurable hostname verification for TLS connections
- **Type Safety**: Strongly typed requests and responses
- **Automatic Compression**: Handles gzip compression/decompression transparently
- **Protocol Buffer Support**: Decodes binary process data automatically
- **Error Handling**: Comprehensive error handling with structured error types for pattern matching
## Connection Types
### Non-TLS Connections (`ws://`)
Use `connect_to_socktop_agent()` for unencrypted WebSocket connections.
### TLS Connections (`wss://`)
Use `connect_to_socktop_agent_with_tls()` for encrypted connections with certificate pinning. You can control hostname verification with the `verify_hostname` parameter.
## Quick Start
Add this to your `Cargo.toml`:
```toml
[dependencies]
socktop_connector = "0.1.5"
tokio = { version = "1", features = ["rt", "rt-multi-thread", "net", "time", "macros"] }
```
### Basic Usage
```rust
use socktop_connector::{connect_to_socktop_agent, AgentRequest, AgentResponse};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Connect to a socktop agent (non-TLS connections are always unverified)
let mut connector = connect_to_socktop_agent("ws://localhost:3000/ws").await?;
// Request metrics
match connector.request(AgentRequest::Metrics).await? {
AgentResponse::Metrics(metrics) => {
println!("CPU: {}%, Memory: {}/{}MB",
metrics.cpu_total,
metrics.mem_used / 1024 / 1024,
metrics.mem_total / 1024 / 1024
);
}
_ => unreachable!(),
}
// Request process list
match connector.request(AgentRequest::Processes).await? {
AgentResponse::Processes(processes) => {
println!("Total processes: {}", processes.process_count);
for process in processes.top_processes.iter().take(5) {
println!(" {} (PID: {}) - CPU: {}%",
process.name, process.pid, process.cpu_usage);
}
}
_ => unreachable!(),
}
Ok(())
}
```
### Error Handling with Pattern Matching
Take advantage of structured error types for robust error handling:
```rust
use socktop_connector::{connect_to_socktop_agent, ConnectorError, AgentRequest};
#[tokio::main]
async fn main() {
// Handle connection errors specifically
let mut connector = match connect_to_socktop_agent("ws://localhost:3000/ws").await {
Ok(conn) => conn,
Err(ConnectorError::WebSocketError(e)) => {
eprintln!("Failed to connect to WebSocket: {}", e);
return;
}
Err(ConnectorError::UrlError(e)) => {
eprintln!("Invalid URL provided: {}", e);
return;
}
Err(e) => {
eprintln!("Connection failed: {}", e);
return;
}
};
// Handle request errors specifically
match connector.request(AgentRequest::Metrics).await {
Ok(response) => println!("Success: {:?}", response),
Err(ConnectorError::JsonError(e)) => {
eprintln!("Failed to parse server response: {}", e);
}
Err(ConnectorError::WebSocketError(e)) => {
eprintln!("Communication error: {}", e);
}
Err(e) => eprintln!("Request failed: {}", e),
}
}
```
### TLS with Certificate Pinning
```rust
use socktop_connector::{connect_to_socktop_agent_with_tls, AgentRequest};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Connect with TLS certificate pinning (hostname verification disabled)
let mut connector = connect_to_socktop_agent_with_tls(
"wss://remote-host:8443/ws",
"/path/to/cert.pem",
false // Skip hostname verification (useful for localhost or IP-based connections)
).await?;
let response = connector.request(AgentRequest::Disks).await?;
println!("Got disk info: {:?}", response);
Ok(())
}
```
### Advanced Configuration
```rust
use socktop_connector::{ConnectorConfig, SocktopConnector, AgentRequest};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a custom configuration
let config = ConnectorConfig::new("wss://remote-host:8443/ws")
.with_tls_ca("/path/to/cert.pem")
.with_hostname_verification(false);
// Create and connect
let mut connector = SocktopConnector::new(config);
connector.connect().await?;
// Make requests
let response = connector.request(AgentRequest::Metrics).await?;
// Clean disconnect
connector.disconnect().await?;
Ok(())
}
```
### WebSocket Protocol Configuration
If you need version compatibility, you can configure the WebSocket protocol version and sub-protocols:
```rust
use socktop_connector::{ConnectorConfig, SocktopConnector, connect_to_socktop_agent_with_config};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Method 1: Using the convenience function
let connector = connect_to_socktop_agent_with_config(
"ws://localhost:3000/ws",
Some(vec!["socktop".to_string(), "v1".to_string()]), // Sub-protocols
Some("13".to_string()), // WebSocket version (13 is standard)
).await?;
// Method 2: Using ConnectorConfig builder
let config = ConnectorConfig::new("ws://localhost:3000/ws")
.with_protocols(vec!["socktop".to_string()])
.with_version("13");
let mut connector = SocktopConnector::new(config);
connector.connect().await?;
Ok(())
}
```
**Note:** WebSocket version 13 is the current standard and is used by default. The sub-protocols feature is useful for protocol negotiation with servers that support multiple protocols.
## Continuous Updates
The socktop agent provides real-time system metrics. Each request returns the current snapshot, but you can implement continuous monitoring by making requests in a loop:
```rust
use socktop_connector::{connect_to_socktop_agent, AgentRequest, AgentResponse, ConnectorError};
use tokio::time::{sleep, Duration};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut connector = connect_to_socktop_agent("ws://localhost:3000/ws").await?;
// Monitor system metrics every 2 seconds
loop {
match connector.request(AgentRequest::Metrics).await {
Ok(AgentResponse::Metrics(metrics)) => {
// Calculate total network activity across all interfaces
let total_rx: u64 = metrics.networks.iter().map(|n| n.received).sum();
let total_tx: u64 = metrics.networks.iter().map(|n| n.transmitted).sum();
println!("CPU: {:.1}%, Memory: {:.1}%, Network: ↓{} ↑{}",
metrics.cpu_total,
(metrics.mem_used as f64 / metrics.mem_total as f64) * 100.0,
format_bytes(total_rx),
format_bytes(total_tx)
);
}
Err(e) => {
eprintln!("Error getting metrics: {}", e);
// You can pattern match on specific error types for different handling
match e {
socktop_connector::ConnectorError::WebSocketError(_) => {
eprintln!("Connection lost, attempting to reconnect...");
// Implement reconnection logic here
break;
}
socktop_connector::ConnectorError::JsonError(_) => {
eprintln!("Data parsing error, continuing...");
// Continue with next iteration for transient parsing errors
}
_ => {
eprintln!("Other error, stopping monitoring");
break;
}
}
}
_ => unreachable!(),
}
sleep(Duration::from_secs(2)).await;
}
Ok(())
}
fn format_bytes(bytes: u64) -> String {
const UNITS: &[&str] = &["B", "KB", "MB", "GB"];
let mut size = bytes as f64;
let mut unit_index = 0;
while size >= 1024.0 && unit_index < UNITS.len() - 1 {
size /= 1024.0;
unit_index += 1;
}
format!("{:.1}{}", size, UNITS[unit_index])
}
```
### Understanding Data Freshness
The socktop agent implements intelligent caching to avoid overwhelming the system:
- **Metrics**: Cached for ~250ms by default (cheap / fast-changing data like CPU, memory)
- **Processes**: Cached for ~1500ms by default (expensive / moderately changing data)
- **Disks**: Cached for ~1000ms by default (cheap / slowly changing data)
These values are tuned in advance and should rarely need overriding. The cache exists for the case where multiple clients request data concurrently; a single client should rarely hit a cached response, since typical polling rates are slower than the cache intervals. Each interval is tuned to how much work the agent must do to reload fresh data.
This means:
1. **Multiple rapid requests** for the same data type will return cached results
2. **Different data types** have independent cache timers
3. **Fresh data** is automatically retrieved when cache expires
```rust
use socktop_connector::{connect_to_socktop_agent, AgentRequest, AgentResponse};
use tokio::time::{sleep, Duration};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut connector = connect_to_socktop_agent("ws://localhost:3000/ws").await?;
// This demonstrates cache behavior
println!("Requesting metrics twice quickly...");
// First request - fresh data from system
let start = std::time::Instant::now();
connector.request(AgentRequest::Metrics).await?;
println!("First request took: {:?}", start.elapsed());
// Second request immediately - cached data
let start = std::time::Instant::now();
connector.request(AgentRequest::Metrics).await?;
println!("Second request took: {:?}", start.elapsed()); // Much faster!
// Wait for cache to expire, then request again
sleep(Duration::from_millis(300)).await;
let start = std::time::Instant::now();
connector.request(AgentRequest::Metrics).await?;
println!("Third request (after cache expiry): {:?}", start.elapsed());
Ok(())
}
```
The WebSocket connection remains open between requests, providing efficient real-time monitoring without connection overhead.
## Request Types
The library supports three types of requests:
- `AgentRequest::Metrics` - Get current system metrics (CPU, memory, network, etc.)
- `AgentRequest::Disks` - Get disk usage information
- `AgentRequest::Processes` - Get running process information
## Response Types
Responses are automatically parsed into strongly-typed structures:
- `AgentResponse::Metrics(Metrics)` - System metrics with CPU, memory, network data
- `AgentResponse::Disks(Vec<DiskInfo>)` - List of disk usage information
- `AgentResponse::Processes(ProcessesPayload)` - Process list with CPU and memory usage
## Configuration Options
The library provides flexible configuration through the `ConnectorConfig` builder:
- `with_tls_ca(path)` - Enable TLS with certificate pinning
- `with_hostname_verification(bool)` - Control hostname verification for TLS connections
- `true` (recommended): Verify the server hostname matches the certificate
- `false`: Skip hostname verification (useful for localhost or IP-based connections)
- `with_protocols(Vec<String>)` - Set WebSocket sub-protocols for protocol negotiation
- `with_version(String)` - Set WebSocket protocol version (default is "13", the current standard)
**Note**: Hostname verification only applies to TLS connections (`wss://`). Non-TLS connections (`ws://`) don't use certificates, so hostname verification is not applicable.
## WASM Compatibility (experimental)
`socktop_connector` provides **full WebSocket support** for WebAssembly (WASM) environments, including complete networking functionality with automatic compression and protobuf decoding.
### Quick Setup
```toml
[dependencies]
socktop_connector = { version = "0.1.5", default-features = false, features = ["wasm"] }
wasm-bindgen = "0.2"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
```
### What Works
- ✅ Full WebSocket connectivity (`ws://` connections)
- ✅ All request types (`Metrics`, `Disks`, `Processes`)
- ✅ Automatic gzip decompression for metrics and disks
- ✅ Automatic protobuf decoding for process data
- ✅ All types (`ConnectorConfig`, `AgentRequest`, `AgentResponse`)
- ✅ JSON serialization/deserialization
- ✅ Protocol and version configuration
### What Doesn't Work
- ❌ TLS connections (`wss://`) - use `ws://` only
- ❌ TLS certificate handling
### Basic WASM Usage
```rust
use wasm_bindgen::prelude::*;
use socktop_connector::{ConnectorConfig, SocktopConnector, AgentRequest};
#[wasm_bindgen]
pub async fn test_connection() {
// NOTE: console_log! is assumed to be a user-defined macro wrapping web_sys::console
let config = ConnectorConfig::new("ws://localhost:3000/ws");
let mut connector = SocktopConnector::new(config);
match connector.connect().await {
Ok(()) => {
// Request metrics with automatic gzip decompression
let response = connector.request(AgentRequest::Metrics).await.unwrap();
console_log!("Got metrics: {:?}", response);
// Request processes with automatic protobuf decoding
let response = connector.request(AgentRequest::Processes).await.unwrap();
console_log!("Got processes: {:?}", response);
}
Err(e) => console_log!("Connection failed: {}", e),
}
}
```
### Complete WASM Guide
For detailed implementation examples, complete code samples, and a working test environment, see the **[WASM Compatibility Guide](../socktop_wasm_test/README.md)** in the `socktop_wasm_test/` directory.
## Security Considerations
- **Production TLS**: You can enable hostname verification (`verify_hostname: true`) for production systems. This adds an extra layer of protection by checking the server hostname against the certificate, primarily to defeat man-in-the-middle attacks. Because it is the client, not the server, that would be fooled, the practical risk in this use case is low, which is why verification is disabled by default.
- **Certificate Pinning**: Use `with_tls_ca()` for self-signed certificates; the socktop agent generates certificates on first start. See the main README for more details.
- **Non-TLS**: Use only for development or trusted networks
## Environment Variables
Currently no environment variables are used. All configuration is done through the API.
## Error Handling
The library uses structured error types via `thiserror` for comprehensive error handling. You can pattern match on specific error types:
```rust
use socktop_connector::{connect_to_socktop_agent, ConnectorError, AgentRequest};
#[tokio::main]
async fn main() {
match connect_to_socktop_agent("invalid://url").await {
Ok(mut connector) => {
// Handle successful connection
match connector.request(AgentRequest::Metrics).await {
Ok(response) => println!("Got response: {:?}", response),
Err(ConnectorError::WebSocketError(e)) => {
eprintln!("WebSocket communication failed: {}", e);
}
Err(ConnectorError::JsonError(e)) => {
eprintln!("Failed to parse response: {}", e);
}
Err(e) => eprintln!("Other error: {}", e),
}
}
Err(ConnectorError::UrlError(e)) => {
eprintln!("Invalid URL: {}", e);
}
Err(ConnectorError::WebSocketError(e)) => {
eprintln!("Failed to connect: {}", e);
}
Err(ConnectorError::TlsError(msg)) => {
eprintln!("TLS error: {}", msg);
}
Err(e) => {
eprintln!("Connection failed: {}", e);
}
}
}
```
### Error Types
The `ConnectorError` enum provides specific variants for different error conditions:
- `ConnectorError::WebSocketError` - WebSocket connection or communication errors
- `ConnectorError::TlsError` - TLS-related errors (certificate validation, etc.)
- `ConnectorError::UrlError` - URL parsing errors
- `ConnectorError::JsonError` - JSON serialization/deserialization errors
- `ConnectorError::ProtocolError` - Protocol-level errors
- `ConnectorError::CompressionError` - Gzip compression/decompression errors
- `ConnectorError::IoError` - I/O errors
- `ConnectorError::Other` - Other errors with descriptive messages
All errors implement `std::error::Error` so they work seamlessly with `Box<dyn std::error::Error>`, `anyhow`, and other error handling crates.
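As a minimal, self-contained illustration (a sketch, not the crate's actual enum) of why implementing `std::error::Error` lets these errors flow through `Box<dyn std::error::Error>` and the `?` operator:

```rust
use std::fmt;

// Hypothetical stand-in error type; ConnectorError itself derives this
// boilerplate via `thiserror`.
#[derive(Debug)]
enum DemoError {
    Url(String),
}

impl fmt::Display for DemoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DemoError::Url(msg) => write!(f, "invalid URL: {msg}"),
        }
    }
}

impl std::error::Error for DemoError {}

// Composes with Box<dyn Error>, just as ConnectorError does.
fn check_scheme(url: &str) -> Result<(), Box<dyn std::error::Error>> {
    if url.starts_with("ws://") || url.starts_with("wss://") {
        Ok(())
    } else {
        Err(Box::new(DemoError::Url(url.to_string())))
    }
}

fn main() {
    assert!(check_scheme("ws://localhost:3000/ws").is_ok());
    assert!(check_scheme("http://localhost").is_err());
}
```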
### Migration from Generic Errors
If you were previously using the library with generic error handling, your existing code will continue to work:
```rust
// This continues to work as before
async fn my_function() -> Result<(), Box<dyn std::error::Error>> {
let mut connector = connect_to_socktop_agent("ws://localhost:3000/ws").await?;
let response = connector.request(AgentRequest::Metrics).await?;
Ok(())
}
// But now you can also use structured error handling for better control
async fn improved_function() -> Result<(), ConnectorError> {
let mut connector = connect_to_socktop_agent("ws://localhost:3000/ws").await?;
let response = connector.request(AgentRequest::Metrics).await?;
Ok(())
}
```
## License
MIT License - see the LICENSE file for details.


@ -0,0 +1,10 @@
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Set the protoc binary path to use the vendored version for CI compatibility
// SAFETY: We're only setting PROTOC in a build script environment, which is safe
unsafe {
std::env::set_var("PROTOC", protoc_bin_vendored::protoc_bin_path()?);
}
prost_build::compile_protos(&["processes.proto"], &["."])?;
Ok(())
}


@ -0,0 +1,38 @@
//! Example of using socktop_connector in a WASM environment.
//!
//! This example demonstrates how to use the connector without TLS dependencies
//! for WebAssembly builds.
use socktop_connector::{AgentRequest, ConnectorConfig, connect_to_socktop_agent};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("WASM-compatible socktop connector example");
// For WASM builds, use ws:// (not wss://) to avoid TLS dependencies
let url = "ws://localhost:3000/ws";
// Method 1: Simple connection (recommended for most use cases)
let mut connector = connect_to_socktop_agent(url).await?;
// Method 2: With custom WebSocket configuration
let config = ConnectorConfig::new(url)
.with_protocols(vec!["socktop".to_string()])
.with_version("13".to_string());
let mut connector_custom = socktop_connector::SocktopConnector::new(config);
connector_custom.connect().await?;
// Make a request to get metrics
match connector.request(AgentRequest::Metrics).await {
Ok(response) => {
println!("Successfully received response: {response:?}");
}
Err(e) => {
println!("Request failed: {e}");
}
}
println!("WASM example completed successfully!");
Ok(())
}


@ -0,0 +1,15 @@
syntax = "proto3";
package socktop;
// All running processes. Sorting is done client-side.
message Processes {
uint64 process_count = 1; // total processes in the system
repeated Process rows = 2; // all processes
}
message Process {
uint32 pid = 1;
string name = 2;
float cpu_usage = 3; // 0..100
uint64 mem_bytes = 4; // RSS bytes
}
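Since the comment above notes that sorting is done client-side, here is a small self-contained sketch (plain structs standing in for the prost-generated types) of ordering rows by CPU usage:

```rust
// Plain stand-ins for the prost-generated Process rows (sketch only).
#[derive(Debug, Clone)]
struct Process {
    pid: u32,
    name: String,
    cpu_usage: f32, // 0..100
    mem_bytes: u64, // RSS bytes
}

// Client-side sort: highest CPU first; NaN-safe via an Equal fallback.
fn sort_by_cpu_desc(rows: &mut [Process]) {
    rows.sort_by(|a, b| {
        b.cpu_usage
            .partial_cmp(&a.cpu_usage)
            .unwrap_or(std::cmp::Ordering::Equal)
    });
}

fn main() {
    let mut rows = vec![
        Process { pid: 10, name: "idle".into(), cpu_usage: 0.1, mem_bytes: 1024 },
        Process { pid: 42, name: "busy".into(), cpu_usage: 97.5, mem_bytes: 4096 },
    ];
    sort_by_cpu_desc(&mut rows);
    assert_eq!(rows[0].pid, 42);
}
```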


@ -0,0 +1,48 @@
//! Configuration for socktop WebSocket connections.
/// Configuration for connecting to a socktop agent.
#[derive(Debug, Clone)]
pub struct ConnectorConfig {
pub url: String,
pub tls_ca_path: Option<String>,
pub verify_hostname: bool,
pub ws_protocols: Option<Vec<String>>,
pub ws_version: Option<String>,
}
impl ConnectorConfig {
/// Create a new connector configuration with the given URL.
pub fn new(url: impl Into<String>) -> Self {
Self {
url: url.into(),
tls_ca_path: None,
verify_hostname: false,
ws_protocols: None,
ws_version: None,
}
}
/// Set the path to a custom TLS CA certificate file.
pub fn with_tls_ca(mut self, ca_path: impl Into<String>) -> Self {
self.tls_ca_path = Some(ca_path.into());
self
}
/// Enable or disable hostname verification for TLS connections.
pub fn with_hostname_verification(mut self, verify: bool) -> Self {
self.verify_hostname = verify;
self
}
/// Set WebSocket sub-protocols to negotiate.
pub fn with_protocols(mut self, protocols: Vec<String>) -> Self {
self.ws_protocols = Some(protocols);
self
}
/// Set WebSocket protocol version (default is "13").
pub fn with_version(mut self, version: impl Into<String>) -> Self {
self.ws_version = Some(version.into());
self
}
}

File diff suppressed because it is too large


@ -0,0 +1,276 @@
//! Modular SocktopConnector implementation using networking and WASM modules.
use crate::config::ConnectorConfig;
use crate::error::{ConnectorError, Result};
use crate::{AgentRequest, AgentResponse};
#[cfg(feature = "networking")]
use crate::networking::{
WsStream, connect_to_agent, request_disks, request_journal_entries, request_metrics,
request_process_metrics, request_processes,
};
#[cfg(all(feature = "wasm", not(feature = "networking")))]
use crate::wasm::{connect_to_agent, send_request_and_wait};
#[cfg(all(feature = "wasm", not(feature = "networking")))]
use crate::{DiskInfo, Metrics, ProcessesPayload};
#[cfg(all(feature = "wasm", not(feature = "networking")))]
use web_sys::WebSocket;
/// Main connector for communicating with socktop agents
pub struct SocktopConnector {
pub config: ConnectorConfig,
#[cfg(feature = "networking")]
stream: Option<WsStream>,
#[cfg(all(feature = "wasm", not(feature = "networking")))]
websocket: Option<WebSocket>,
}
impl SocktopConnector {
/// Create a new connector with the given configuration
pub fn new(config: ConnectorConfig) -> Self {
Self {
config,
#[cfg(feature = "networking")]
stream: None,
#[cfg(all(feature = "wasm", not(feature = "networking")))]
websocket: None,
}
}
}
#[cfg(feature = "networking")]
impl SocktopConnector {
/// Connect to the agent
pub async fn connect(&mut self) -> Result<()> {
let stream = connect_to_agent(&self.config).await?;
self.stream = Some(stream);
Ok(())
}
/// Send a request to the agent and get the response
pub async fn request(&mut self, request: AgentRequest) -> Result<AgentResponse> {
let stream = self.stream.as_mut().ok_or(ConnectorError::NotConnected)?;
match request {
AgentRequest::Metrics => {
let metrics = request_metrics(stream)
.await
.ok_or_else(|| ConnectorError::invalid_response("Failed to get metrics"))?;
Ok(AgentResponse::Metrics(metrics))
}
AgentRequest::Disks => {
let disks = request_disks(stream)
.await
.ok_or_else(|| ConnectorError::invalid_response("Failed to get disks"))?;
Ok(AgentResponse::Disks(disks))
}
AgentRequest::Processes => {
let processes = request_processes(stream)
.await
.ok_or_else(|| ConnectorError::invalid_response("Failed to get processes"))?;
Ok(AgentResponse::Processes(processes))
}
AgentRequest::ProcessMetrics { pid } => {
let process_metrics =
request_process_metrics(stream, pid).await.ok_or_else(|| {
ConnectorError::invalid_response("Failed to get process metrics")
})?;
Ok(AgentResponse::ProcessMetrics(process_metrics))
}
AgentRequest::JournalEntries { pid } => {
let journal_entries =
request_journal_entries(stream, pid).await.ok_or_else(|| {
ConnectorError::invalid_response("Failed to get journal entries")
})?;
Ok(AgentResponse::JournalEntries(journal_entries))
}
}
}
/// Check if the connector is connected
pub fn is_connected(&self) -> bool {
self.stream.is_some()
}
/// Disconnect from the agent
pub async fn disconnect(&mut self) -> Result<()> {
if let Some(mut stream) = self.stream.take() {
let _ = stream.close(None).await;
}
Ok(())
}
}
// WASM WebSocket implementation
#[cfg(all(feature = "wasm", not(feature = "networking")))]
impl SocktopConnector {
/// Connect to the agent using WASM WebSocket
pub async fn connect(&mut self) -> Result<()> {
let websocket = connect_to_agent(&self.config).await?;
self.websocket = Some(websocket);
Ok(())
}
/// Send a request to the agent and get the response
pub async fn request(&mut self, request: AgentRequest) -> Result<AgentResponse> {
let ws = self
.websocket
.as_ref()
.ok_or(ConnectorError::NotConnected)?;
send_request_and_wait(ws, request).await
}
/// Check if the connector is connected
pub fn is_connected(&self) -> bool {
use crate::utils::WEBSOCKET_OPEN;
self.websocket
.as_ref()
.is_some_and(|ws| ws.ready_state() == WEBSOCKET_OPEN)
}
/// Disconnect from the agent
pub async fn disconnect(&mut self) -> Result<()> {
if let Some(ws) = self.websocket.take() {
let _ = ws.close();
}
Ok(())
}
/// Request metrics from the agent
pub async fn get_metrics(&mut self) -> Result<Metrics> {
match self.request(AgentRequest::Metrics).await? {
AgentResponse::Metrics(metrics) => Ok(metrics),
_ => Err(ConnectorError::protocol_error(
"Unexpected response type for metrics",
)),
}
}
/// Request disk information from the agent
pub async fn get_disks(&mut self) -> Result<Vec<DiskInfo>> {
match self.request(AgentRequest::Disks).await? {
AgentResponse::Disks(disks) => Ok(disks),
_ => Err(ConnectorError::protocol_error(
"Unexpected response type for disks",
)),
}
}
/// Request process information from the agent
pub async fn get_processes(&mut self) -> Result<ProcessesPayload> {
match self.request(AgentRequest::Processes).await? {
AgentResponse::Processes(processes) => Ok(processes),
_ => Err(ConnectorError::protocol_error(
"Unexpected response type for processes",
)),
}
}
}
// Stub implementations when neither networking nor wasm is enabled
#[cfg(not(any(feature = "networking", feature = "wasm")))]
impl SocktopConnector {
/// Connect to the socktop agent endpoint.
///
/// Note: Networking functionality is disabled. Enable the "networking" feature to use this function.
pub async fn connect(&mut self) -> Result<()> {
Err(ConnectorError::protocol_error(
"Networking functionality disabled. Enable the 'networking' feature to connect to agents.",
))
}
/// Send a request to the agent and await a response.
///
/// Note: Networking functionality is disabled. Enable the "networking" feature to use this function.
pub async fn request(&mut self, _request: AgentRequest) -> Result<AgentResponse> {
Err(ConnectorError::protocol_error(
"Networking functionality disabled. Enable the 'networking' feature to send requests.",
))
}
/// Close the connection to the agent.
///
/// Note: Networking functionality is disabled. This is a no-op when networking is disabled.
pub async fn disconnect(&mut self) -> Result<()> {
Ok(()) // No-op when networking is disabled
}
}
/// Convenience function to create a connector and connect in one step.
///
/// This function is for non-TLS WebSocket connections (`ws://`). Since there's no
/// certificate involved, hostname verification is not applicable.
///
/// For TLS connections with certificate pinning, use `connect_to_socktop_agent_with_tls()`.
#[cfg(feature = "networking")]
pub async fn connect_to_socktop_agent(url: impl Into<String>) -> Result<SocktopConnector> {
let config = ConnectorConfig::new(url);
let mut connector = SocktopConnector::new(config);
connector.connect().await?;
Ok(connector)
}
/// Convenience function to create a connector with TLS and connect in one step.
///
/// This function enables TLS with certificate pinning using the provided CA certificate.
/// The `verify_hostname` parameter controls whether the server's hostname is verified
/// against the certificate (recommended for production, can be disabled for testing).
#[cfg(feature = "tls")]
#[cfg(feature = "networking")]
#[cfg_attr(docsrs, doc(cfg(feature = "tls")))]
pub async fn connect_to_socktop_agent_with_tls(
url: impl Into<String>,
ca_path: impl Into<String>,
verify_hostname: bool,
) -> Result<SocktopConnector> {
let config = ConnectorConfig::new(url)
.with_tls_ca(ca_path)
.with_hostname_verification(verify_hostname);
let mut connector = SocktopConnector::new(config);
connector.connect().await?;
Ok(connector)
}
/// Convenience function to create a connector with custom WebSocket protocol configuration.
///
/// This function allows you to specify WebSocket protocol version and sub-protocols.
/// Most users should use the simpler `connect_to_socktop_agent()` function instead.
///
/// # Example
/// ```no_run
/// use socktop_connector::connect_to_socktop_agent_with_config;
///
/// # #[tokio::main]
/// # async fn main() -> Result<(), Box<dyn std::error::Error>> {
/// let connector = connect_to_socktop_agent_with_config(
/// "ws://localhost:3000/ws",
/// Some(vec!["socktop".to_string()]), // WebSocket sub-protocols
/// Some("13".to_string()), // WebSocket version (13 is standard)
/// ).await?;
/// # Ok(())
/// # }
/// ```
#[cfg(feature = "networking")]
pub async fn connect_to_socktop_agent_with_config(
url: impl Into<String>,
protocols: Option<Vec<String>>,
version: Option<String>,
) -> Result<SocktopConnector> {
let mut config = ConnectorConfig::new(url);
if let Some(protocols) = protocols {
config = config.with_protocols(protocols);
}
if let Some(version) = version {
config = config.with_version(version);
}
let mut connector = SocktopConnector::new(config);
connector.connect().await?;
Ok(connector)
}


@ -0,0 +1,155 @@
//! Error types for socktop_connector
use thiserror::Error;
/// Errors that can occur when using socktop_connector
#[derive(Error, Debug)]
pub enum ConnectorError {
/// WebSocket connection failed
#[cfg(feature = "networking")]
#[error("WebSocket connection failed: {source}")]
ConnectionFailed {
source: Box<tokio_tungstenite::tungstenite::Error>,
},
/// URL parsing error
#[cfg(feature = "networking")]
#[error("Invalid URL: {url}")]
InvalidUrl {
url: String,
#[source]
source: url::ParseError,
},
/// TLS certificate error
#[error("TLS certificate error: {message}")]
TlsError {
message: String,
#[source]
source: Box<dyn std::error::Error + Send + Sync>,
},
/// Certificate file not found or invalid
#[error("Certificate file error at '{path}': {message}")]
CertificateError { path: String, message: String },
/// Invalid server response format
#[error("Invalid response from server: {message}")]
InvalidResponse { message: String },
/// JSON parsing error
#[error("JSON parsing error: {source}")]
JsonError {
#[from]
source: serde_json::Error,
},
/// Request/response protocol error
#[error("Protocol error: {message}")]
ProtocolError { message: String },
/// Connection is not established
#[error("Not connected to server")]
NotConnected,
/// Connection was closed unexpectedly
#[error("Connection closed: {reason}")]
ConnectionClosed { reason: String },
/// IO error (network, file system, etc.)
#[error("IO error: {source}")]
IoError {
#[from]
source: std::io::Error,
},
/// Compression/decompression error
#[error("Compression error: {message}")]
CompressionError { message: String },
/// Protocol Buffer parsing error
#[error("Protocol buffer error: {source}")]
ProtobufError {
#[from]
source: prost::DecodeError,
},
}
/// Result type alias for connector operations
pub type Result<T> = std::result::Result<T, ConnectorError>;
impl ConnectorError {
/// Create a TLS error with context
pub fn tls_error(
message: impl Into<String>,
source: impl std::error::Error + Send + Sync + 'static,
) -> Self {
Self::TlsError {
message: message.into(),
source: Box::new(source),
}
}
/// Create a certificate error
pub fn certificate_error(path: impl Into<String>, message: impl Into<String>) -> Self {
Self::CertificateError {
path: path.into(),
message: message.into(),
}
}
/// Create a protocol error
pub fn protocol_error(message: impl Into<String>) -> Self {
Self::ProtocolError {
message: message.into(),
}
}
/// Create an invalid response error
pub fn invalid_response(message: impl Into<String>) -> Self {
Self::InvalidResponse {
message: message.into(),
}
}
/// Create a connection closed error
pub fn connection_closed(reason: impl Into<String>) -> Self {
Self::ConnectionClosed {
reason: reason.into(),
}
}
/// Create a compression error
pub fn compression_error(message: impl Into<String>) -> Self {
Self::CompressionError {
message: message.into(),
}
}
/// Create a serialization error (wraps JSON error)
pub fn serialization_error(message: impl Into<String>) -> Self {
Self::ProtocolError {
message: message.into(),
}
}
}
#[cfg(feature = "networking")]
impl From<url::ParseError> for ConnectorError {
fn from(source: url::ParseError) -> Self {
Self::InvalidUrl {
url: "unknown".to_string(), // We don't have the URL in the error context
source,
}
}
}
// Manual From implementation for boxed tungstenite errors
#[cfg(feature = "networking")]
impl From<tokio_tungstenite::tungstenite::Error> for ConnectorError {
fn from(source: tokio_tungstenite::tungstenite::Error) -> Self {
Self::ConnectionFailed {
source: Box::new(source),
}
}
}
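The helper constructors on `ConnectorError` (`protocol_error`, `invalid_response`, and so on) keep call sites terse by accepting `impl Into<String>`. A minimal self-contained sketch of the same pattern, using a hypothetical `DemoError` in place of the `thiserror`-derived type:

```rust
use std::fmt;

// Simplified stand-in for ConnectorError: one variant with a message payload,
// one unit variant, mirroring ProtocolError and NotConnected above.
#[derive(Debug)]
enum DemoError {
    Protocol { message: String },
    NotConnected,
}

impl fmt::Display for DemoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DemoError::Protocol { message } => write!(f, "Protocol error: {message}"),
            DemoError::NotConnected => write!(f, "Not connected to server"),
        }
    }
}

impl DemoError {
    // Helper constructor in the style of ConnectorError::protocol_error:
    // accepts &str, String, or anything else convertible into String.
    fn protocol_error(message: impl Into<String>) -> Self {
        DemoError::Protocol { message: message.into() }
    }
}

fn main() {
    let e = DemoError::protocol_error("unexpected frame");
    assert_eq!(e.to_string(), "Protocol error: unexpected frame");
    println!("{e}");
    println!("{}", DemoError::NotConnected);
}
```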


@ -0,0 +1,183 @@
//! WebSocket connector library for socktop agents.
//!
//! This library provides a high-level interface for connecting to socktop agents
//! over WebSocket connections with support for TLS and certificate pinning.
//!
//! # Quick Start
//!
//! ```no_run
//! use socktop_connector::{connect_to_socktop_agent, AgentRequest, AgentResponse};
//!
//! #[tokio::main]
//! async fn main() -> Result<(), Box<dyn std::error::Error>> {
//! let mut connector = connect_to_socktop_agent("ws://localhost:3000/ws").await?;
//!
//! // Get comprehensive system metrics
//! if let Ok(AgentResponse::Metrics(metrics)) = connector.request(AgentRequest::Metrics).await {
//! println!("Hostname: {}", metrics.hostname);
//! println!("CPU Usage: {:.1}%", metrics.cpu_total);
//!
//! // CPU temperature if available
//! if let Some(temp) = metrics.cpu_temp_c {
//! println!("CPU Temperature: {:.1}°C", temp);
//! }
//!
//! // Memory usage
//! println!("Memory: {:.1} GB / {:.1} GB",
//! metrics.mem_used as f64 / 1_000_000_000.0,
//! metrics.mem_total as f64 / 1_000_000_000.0);
//!
//! // Per-core CPU usage
//! for (i, usage) in metrics.cpu_per_core.iter().enumerate() {
//! println!("Core {}: {:.1}%", i, usage);
//! }
//!
//! // GPU information
//! if let Some(gpus) = &metrics.gpus {
//! for gpu in gpus {
//! if let Some(name) = &gpu.name {
//! println!("GPU {}: {:.1}% usage", name, gpu.utilization.unwrap_or(0.0));
//! if let Some(temp) = gpu.temp {
//! println!(" Temperature: {:.1}°C", temp);
//! }
//! }
//! }
//! }
//! }
//!
//! // Get process information
//! if let Ok(AgentResponse::Processes(processes)) = connector.request(AgentRequest::Processes).await {
//! println!("Running processes: {}", processes.process_count);
//! for proc in &processes.top_processes {
//! println!(" PID {}: {} ({:.1}% CPU, {:.1} MB RAM)",
//! proc.pid, proc.name, proc.cpu_usage, proc.mem_bytes as f64 / 1_000_000.0);
//! }
//! }
//!
//! // Get disk information
//! if let Ok(AgentResponse::Disks(disks)) = connector.request(AgentRequest::Disks).await {
//! for disk in disks {
//! let used_gb = (disk.total - disk.available) as f64 / 1_000_000_000.0;
//! let total_gb = disk.total as f64 / 1_000_000_000.0;
//! println!("Disk {}: {:.1} GB / {:.1} GB", disk.name, used_gb, total_gb);
//! }
//! }
//!
//! Ok(())
//! }
//! ```
//!
//! # TLS Support
//!
//! ```no_run
//! use socktop_connector::connect_to_socktop_agent_with_tls;
//!
//! # #[tokio::main]
//! # async fn main() -> Result<(), Box<dyn std::error::Error>> {
//! let connector = connect_to_socktop_agent_with_tls(
//! "wss://secure-host:3000/ws",
//! "/path/to/ca.pem",
//! false // Disable hostname verification (testing only; pass true in production)
//! ).await?;
//! # Ok(())
//! # }
//! ```
//!
//! # Continuous Monitoring
//!
//! For real-time system monitoring, you can make requests in a loop. The agent
//! implements intelligent caching to avoid overwhelming the system:
//!
//! ```no_run
//! use socktop_connector::{connect_to_socktop_agent, AgentRequest, AgentResponse};
//! use tokio::time::{sleep, Duration};
//!
//! #[tokio::main]
//! async fn main() -> Result<(), Box<dyn std::error::Error>> {
//! let mut connector = connect_to_socktop_agent("ws://localhost:3000/ws").await?;
//!
//! // Monitor system metrics every 2 seconds
//! loop {
//! match connector.request(AgentRequest::Metrics).await {
//! Ok(AgentResponse::Metrics(metrics)) => {
//! // Calculate total network activity across all interfaces
//! let total_rx: u64 = metrics.networks.iter().map(|n| n.received).sum();
//! let total_tx: u64 = metrics.networks.iter().map(|n| n.transmitted).sum();
//!
//! println!("CPU: {:.1}%, Memory: {:.1}%, Network: ↓{} ↑{}",
//! metrics.cpu_total,
//! (metrics.mem_used as f64 / metrics.mem_total as f64) * 100.0,
//! format_bytes(total_rx),
//! format_bytes(total_tx)
//! );
//! }
//! Err(e) => {
//! eprintln!("Connection error: {}", e);
//! break;
//! }
//! _ => unreachable!(),
//! }
//!
//! sleep(Duration::from_secs(2)).await;
//! }
//!
//! Ok(())
//! }
//!
//! fn format_bytes(bytes: u64) -> String {
//! const UNITS: &[&str] = &["B", "KB", "MB", "GB"];
//! let mut size = bytes as f64;
//! let mut unit_index = 0;
//!
//! while size >= 1024.0 && unit_index < UNITS.len() - 1 {
//! size /= 1024.0;
//! unit_index += 1;
//! }
//!
//! format!("{:.1}{}", size, UNITS[unit_index])
//! }
//! ```
#![cfg_attr(docsrs, feature(doc_cfg))]
// Core modules
pub mod config;
pub mod error;
pub mod types;
pub mod utils;
// Implementation modules
#[cfg(feature = "networking")]
pub mod networking;
#[cfg(feature = "wasm")]
pub mod wasm;
// Main connector implementation
pub mod connector_impl;
// Re-export the main types
pub use config::ConnectorConfig;
pub use connector_impl::SocktopConnector;
pub use error::{ConnectorError, Result};
pub use types::{
AgentRequest, AgentResponse, DetailedProcessInfo, DiskInfo, GpuInfo, JournalEntry,
JournalResponse, LogLevel, Metrics, NetworkInfo, ProcessInfo, ProcessMetricsResponse,
ProcessesPayload,
};
// Re-export convenience functions
#[cfg(feature = "networking")]
pub use connector_impl::{connect_to_socktop_agent, connect_to_socktop_agent_with_config};
#[cfg(all(feature = "tls", feature = "networking"))]
pub use connector_impl::connect_to_socktop_agent_with_tls;
#[cfg(feature = "networking")]
pub use networking::WsStream;
// Protobuf types for internal use
#[cfg(any(feature = "networking", feature = "wasm"))]
pub mod pb {
include!(concat!(env!("OUT_DIR"), "/socktop.rs"));
}


@ -0,0 +1,183 @@
//! WebSocket connection handling for native (non-WASM) environments.
use crate::config::ConnectorConfig;
use crate::error::{ConnectorError, Result};
use tokio_tungstenite::tungstenite::client::IntoClientRequest;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream, connect_async};
use url::Url;
#[cfg(feature = "tls")]
use {
rustls::{
self, ClientConfig, DigitallySignedStruct, RootCertStore, SignatureScheme,
client::danger::{HandshakeSignatureValid, ServerCertVerified, ServerCertVerifier},
crypto::ring,
pki_types::{CertificateDer, ServerName, UnixTime},
},
rustls_pemfile::Item,
std::fs::File,
std::io::BufReader,
std::sync::Arc,
tokio_tungstenite::Connector,
};
pub type WsStream = WebSocketStream<MaybeTlsStream<tokio::net::TcpStream>>;
/// Connect to the agent and return the WS stream
pub async fn connect_to_agent(config: &ConnectorConfig) -> Result<WsStream> {
#[cfg(feature = "tls")]
ensure_crypto_provider();
let mut u = Url::parse(&config.url)?;
if let Some(ca_path) = &config.tls_ca_path {
if u.scheme() == "ws" {
let _ = u.set_scheme("wss");
}
return connect_with_ca_and_config(u.as_str(), ca_path, config).await;
}
// No TLS - hostname verification is not applicable
connect_without_ca_and_config(u.as_str(), config).await
}
async fn connect_without_ca_and_config(url: &str, config: &ConnectorConfig) -> Result<WsStream> {
let mut req = url.into_client_request()?;
// Apply WebSocket protocol configuration
if let Some(version) = &config.ws_version {
req.headers_mut().insert(
"Sec-WebSocket-Version",
version
.parse()
.map_err(|_| ConnectorError::protocol_error("Invalid WebSocket version"))?,
);
}
if let Some(protocols) = &config.ws_protocols {
let protocols_str = protocols.join(", ");
req.headers_mut().insert(
"Sec-WebSocket-Protocol",
protocols_str
.parse()
.map_err(|_| ConnectorError::protocol_error("Invalid WebSocket protocols"))?,
);
}
let (ws, _) = connect_async(req).await?;
Ok(ws)
}
#[cfg(feature = "tls")]
async fn connect_with_ca_and_config(
url: &str,
ca_path: &str,
config: &ConnectorConfig,
) -> Result<WsStream> {
// Initialize the crypto provider for rustls
let _ = rustls::crypto::ring::default_provider().install_default();
let mut root = RootCertStore::empty();
let mut reader = BufReader::new(File::open(ca_path)?);
let mut der_certs = Vec::new();
while let Ok(Some(item)) = rustls_pemfile::read_one(&mut reader) {
if let Item::X509Certificate(der) = item {
der_certs.push(der);
}
}
root.add_parsable_certificates(der_certs);
let mut cfg = ClientConfig::builder()
.with_root_certificates(root)
.with_no_client_auth();
let mut req = url.into_client_request()?;
// Apply WebSocket protocol configuration
if let Some(version) = &config.ws_version {
req.headers_mut().insert(
"Sec-WebSocket-Version",
version
.parse()
.map_err(|_| ConnectorError::protocol_error("Invalid WebSocket version"))?,
);
}
if let Some(protocols) = &config.ws_protocols {
let protocols_str = protocols.join(", ");
req.headers_mut().insert(
"Sec-WebSocket-Protocol",
protocols_str
.parse()
.map_err(|_| ConnectorError::protocol_error("Invalid WebSocket protocols"))?,
);
}
if !config.verify_hostname {
#[derive(Debug)]
struct NoVerify;
impl ServerCertVerifier for NoVerify {
fn verify_server_cert(
&self,
_end_entity: &CertificateDer<'_>,
_intermediates: &[CertificateDer<'_>],
_server_name: &ServerName,
_ocsp_response: &[u8],
_now: UnixTime,
) -> std::result::Result<ServerCertVerified, rustls::Error> {
Ok(ServerCertVerified::assertion())
}
fn verify_tls12_signature(
&self,
_message: &[u8],
_cert: &CertificateDer<'_>,
_dss: &DigitallySignedStruct,
) -> std::result::Result<HandshakeSignatureValid, rustls::Error> {
Ok(HandshakeSignatureValid::assertion())
}
fn verify_tls13_signature(
&self,
_message: &[u8],
_cert: &CertificateDer<'_>,
_dss: &DigitallySignedStruct,
) -> std::result::Result<HandshakeSignatureValid, rustls::Error> {
Ok(HandshakeSignatureValid::assertion())
}
fn supported_verify_schemes(&self) -> Vec<SignatureScheme> {
vec![
SignatureScheme::ECDSA_NISTP256_SHA256,
SignatureScheme::ED25519,
SignatureScheme::RSA_PSS_SHA256,
]
}
}
cfg.dangerous().set_certificate_verifier(Arc::new(NoVerify));
// Note: hostname verification disabled via ConnectorConfig::verify_hostname = false;
// intended for testing against self-signed certificates only.
}
let cfg = Arc::new(cfg);
let (ws, _) = tokio_tungstenite::connect_async_tls_with_config(
req,
None,
false, // disable_nagle: this argument toggles TCP_NODELAY, not TLS behavior
Some(Connector::Rustls(cfg)),
)
.await?;
Ok(ws)
}
#[cfg(not(feature = "tls"))]
async fn connect_with_ca_and_config(
_url: &str,
_ca_path: &str,
_config: &ConnectorConfig,
) -> Result<WsStream> {
Err(ConnectorError::tls_error(
"TLS support not compiled in",
std::io::Error::new(std::io::ErrorKind::Unsupported, "TLS not available"),
))
}
#[cfg(feature = "tls")]
fn ensure_crypto_provider() {
let _ = ring::default_provider().install_default();
}


@ -0,0 +1,7 @@
//! Networking module for native WebSocket connections.
pub mod connection;
pub mod requests;
pub use connection::*;
pub use requests::*;


@ -0,0 +1,118 @@
//! WebSocket request handlers for native (non-WASM) environments.
use crate::networking::WsStream;
use crate::types::{JournalResponse, ProcessMetricsResponse};
use crate::utils::{gunzip_to_string, gunzip_to_vec, is_gzip};
use crate::{DiskInfo, Metrics, ProcessInfo, ProcessesPayload, pb};
use futures_util::{SinkExt, StreamExt};
use prost::Message as ProstMessage;
use tokio_tungstenite::tungstenite::Message;
/// Send a "get_metrics" request and await a single JSON reply
pub async fn request_metrics(ws: &mut WsStream) -> Option<Metrics> {
if ws.send(Message::Text("get_metrics".into())).await.is_err() {
return None;
}
match ws.next().await {
Some(Ok(Message::Binary(b))) => gunzip_to_string(&b)
.ok()
.and_then(|s| serde_json::from_str::<Metrics>(&s).ok()),
Some(Ok(Message::Text(json))) => serde_json::from_str::<Metrics>(&json).ok(),
_ => None,
}
}
/// Send a "get_disks" request and await a JSON Vec<DiskInfo>
pub async fn request_disks(ws: &mut WsStream) -> Option<Vec<DiskInfo>> {
if ws.send(Message::Text("get_disks".into())).await.is_err() {
return None;
}
match ws.next().await {
Some(Ok(Message::Binary(b))) => gunzip_to_string(&b)
.ok()
.and_then(|s| serde_json::from_str::<Vec<DiskInfo>>(&s).ok()),
Some(Ok(Message::Text(json))) => serde_json::from_str::<Vec<DiskInfo>>(&json).ok(),
_ => None,
}
}
/// Send a "get_processes" request and await a ProcessesPayload decoded from protobuf (binary, may be gzipped)
pub async fn request_processes(ws: &mut WsStream) -> Option<ProcessesPayload> {
if ws
.send(Message::Text("get_processes".into()))
.await
.is_err()
{
return None;
}
match ws.next().await {
Some(Ok(Message::Binary(b))) => {
let gz = is_gzip(&b);
let data = if gz { gunzip_to_vec(&b).ok()? } else { b };
match pb::Processes::decode(data.as_slice()) {
Ok(pb) => {
let rows: Vec<ProcessInfo> = pb
.rows
.into_iter()
.map(|p: pb::Process| ProcessInfo {
pid: p.pid,
name: p.name,
cpu_usage: p.cpu_usage,
mem_bytes: p.mem_bytes,
})
.collect();
Some(ProcessesPayload {
process_count: pb.process_count as usize,
top_processes: rows,
})
}
Err(e) => {
if std::env::var("SOCKTOP_DEBUG").ok().as_deref() == Some("1") {
eprintln!("protobuf decode failed: {e}");
}
// Fallback: maybe it's JSON (bytes already decompressed if gz)
match String::from_utf8(data) {
Ok(s) => serde_json::from_str::<ProcessesPayload>(&s).ok(),
Err(_) => None,
}
}
}
}
Some(Ok(Message::Text(json))) => serde_json::from_str::<ProcessesPayload>(&json).ok(),
_ => None,
}
}
/// Send a "get_process_metrics:{pid}" request and await a JSON ProcessMetricsResponse
pub async fn request_process_metrics(
ws: &mut WsStream,
pid: u32,
) -> Option<ProcessMetricsResponse> {
let request = format!("get_process_metrics:{pid}");
if ws.send(Message::Text(request)).await.is_err() {
return None;
}
match ws.next().await {
Some(Ok(Message::Binary(b))) => gunzip_to_string(&b)
.ok()
.and_then(|s| serde_json::from_str::<ProcessMetricsResponse>(&s).ok()),
Some(Ok(Message::Text(json))) => serde_json::from_str::<ProcessMetricsResponse>(&json).ok(),
_ => None,
}
}
/// Send a "get_journal_entries:{pid}" request and await a JSON JournalResponse
pub async fn request_journal_entries(ws: &mut WsStream, pid: u32) -> Option<JournalResponse> {
let request = format!("get_journal_entries:{pid}");
if ws.send(Message::Text(request)).await.is_err() {
return None;
}
match ws.next().await {
Some(Ok(Message::Binary(b))) => gunzip_to_string(&b)
.ok()
.and_then(|s| serde_json::from_str::<JournalResponse>(&s).ok()),
Some(Ok(Message::Text(json))) => serde_json::from_str::<JournalResponse>(&json).ok(),
_ => None,
}
}


@ -0,0 +1,196 @@
//! Types that represent data from the socktop agent.
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ProcessInfo {
pub pid: u32,
pub name: String,
pub cpu_usage: f32,
pub mem_bytes: u64,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct DiskInfo {
pub name: String,
pub total: u64,
pub available: u64,
#[serde(default)]
pub temperature: Option<f32>,
#[serde(default)]
pub is_partition: bool,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct NetworkInfo {
pub name: String,
pub received: u64,
pub transmitted: u64,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GpuInfo {
pub name: Option<String>,
pub vendor: Option<String>,
// Accept both the new and legacy keys
#[serde(
default,
alias = "utilization_gpu_pct",
alias = "gpu_util_pct",
alias = "gpu_utilization"
)]
pub utilization: Option<f32>,
#[serde(default, alias = "mem_used_bytes", alias = "vram_used_bytes")]
pub mem_used: Option<u64>,
#[serde(default, alias = "mem_total_bytes", alias = "vram_total_bytes")]
pub mem_total: Option<u64>,
#[serde(default, alias = "temp_c", alias = "temperature_c")]
pub temp: Option<f32>,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct Metrics {
pub cpu_total: f32,
pub cpu_per_core: Vec<f32>,
pub mem_total: u64,
pub mem_used: u64,
pub swap_total: u64,
pub swap_used: u64,
pub hostname: String,
pub cpu_temp_c: Option<f32>,
pub disks: Vec<DiskInfo>,
pub networks: Vec<NetworkInfo>,
pub top_processes: Vec<ProcessInfo>,
pub gpus: Option<Vec<GpuInfo>>,
// Last reported total process count, when provided by the agent
#[serde(default)]
pub process_count: Option<usize>,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ProcessesPayload {
pub process_count: usize,
pub top_processes: Vec<ProcessInfo>,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ThreadInfo {
pub tid: u32, // Thread ID
pub name: String, // Thread name (from /proc/{pid}/task/{tid}/comm)
pub cpu_time_user: u64, // User CPU time in microseconds
pub cpu_time_system: u64, // System CPU time in microseconds
pub status: String, // Thread status (Running, Sleeping, etc.)
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct DetailedProcessInfo {
pub pid: u32,
pub name: String,
pub command: String,
pub cpu_usage: f32,
pub mem_bytes: u64,
pub virtual_mem_bytes: u64,
pub shared_mem_bytes: Option<u64>,
pub thread_count: u32,
pub fd_count: Option<u32>,
pub status: String,
pub parent_pid: Option<u32>,
pub user_id: u32,
pub group_id: u32,
pub start_time: u64, // Unix timestamp
pub cpu_time_user: u64, // Microseconds
pub cpu_time_system: u64, // Microseconds
pub read_bytes: Option<u64>,
pub write_bytes: Option<u64>,
pub working_directory: Option<String>,
pub executable_path: Option<String>,
pub child_processes: Vec<DetailedProcessInfo>,
pub threads: Vec<ThreadInfo>,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ProcessMetricsResponse {
pub process: DetailedProcessInfo,
pub cached_at: u64, // Unix timestamp when this data was cached
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct JournalEntry {
pub timestamp: String, // ISO 8601 formatted timestamp
pub priority: LogLevel,
pub message: String,
pub unit: Option<String>, // systemd unit name
pub pid: Option<u32>,
pub comm: Option<String>, // process command name
pub uid: Option<u32>,
pub gid: Option<u32>,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub enum LogLevel {
Emergency = 0,
Alert = 1,
Critical = 2,
Error = 3,
Warning = 4,
Notice = 5,
Info = 6,
Debug = 7,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct JournalResponse {
pub entries: Vec<JournalEntry>,
pub total_count: u32,
pub truncated: bool,
pub cached_at: u64, // Unix timestamp when this data was cached
}
/// Request types that can be sent to the agent
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "type")]
pub enum AgentRequest {
#[serde(rename = "metrics")]
Metrics,
#[serde(rename = "disks")]
Disks,
#[serde(rename = "processes")]
Processes,
#[serde(rename = "process_metrics")]
ProcessMetrics { pid: u32 },
#[serde(rename = "journal_entries")]
JournalEntries { pid: u32 },
}
impl AgentRequest {
/// Convert to the legacy string format used by the agent
pub fn to_legacy_string(&self) -> String {
match self {
AgentRequest::Metrics => "get_metrics".to_string(),
AgentRequest::Disks => "get_disks".to_string(),
AgentRequest::Processes => "get_processes".to_string(),
AgentRequest::ProcessMetrics { pid } => format!("get_process_metrics:{pid}"),
AgentRequest::JournalEntries { pid } => format!("get_journal_entries:{pid}"),
}
}
}
/// Response types that can be received from the agent
#[derive(Debug, Clone, Deserialize, Serialize)]
#[serde(tag = "type")]
pub enum AgentResponse {
#[serde(rename = "metrics")]
Metrics(Metrics),
#[serde(rename = "disks")]
Disks(Vec<DiskInfo>),
#[serde(rename = "processes")]
Processes(ProcessesPayload),
#[serde(rename = "process_metrics")]
ProcessMetrics(ProcessMetricsResponse),
#[serde(rename = "journal_entries")]
JournalEntries(JournalResponse),
}
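The `to_legacy_string` mapping above is the entire request wire format: a plain text command, with the PID appended after a colon for the per-process queries. A self-contained sketch that mirrors just that mapping (the real enum lives in `types.rs`; `Req` here is an illustrative stand-in):

```rust
// Mirror of AgentRequest::to_legacy_string for three request kinds;
// the string mapping follows the match in types.rs.
enum Req {
    Metrics,
    ProcessMetrics { pid: u32 },
    JournalEntries { pid: u32 },
}

fn to_legacy_string(r: &Req) -> String {
    match r {
        Req::Metrics => "get_metrics".to_string(),
        Req::ProcessMetrics { pid } => format!("get_process_metrics:{pid}"),
        Req::JournalEntries { pid } => format!("get_journal_entries:{pid}"),
    }
}

fn main() {
    assert_eq!(to_legacy_string(&Req::Metrics), "get_metrics");
    assert_eq!(
        to_legacy_string(&Req::ProcessMetrics { pid: 42 }),
        "get_process_metrics:42"
    );
    assert_eq!(
        to_legacy_string(&Req::JournalEntries { pid: 1 }),
        "get_journal_entries:1"
    );
    println!("legacy request strings ok");
}
```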


@ -0,0 +1,67 @@
//! Shared utilities for both networking and WASM implementations.
#[cfg(any(feature = "networking", feature = "wasm"))]
use flate2::read::GzDecoder;
#[cfg(any(feature = "networking", feature = "wasm"))]
use std::io::Read;
#[cfg(any(feature = "networking", feature = "wasm"))]
use crate::error::{ConnectorError, Result};
// WebSocket state constants
#[cfg(feature = "wasm")]
#[allow(dead_code)]
pub const WEBSOCKET_CONNECTING: u16 = 0;
#[cfg(feature = "wasm")]
#[allow(dead_code)]
pub const WEBSOCKET_OPEN: u16 = 1;
#[cfg(feature = "wasm")]
#[allow(dead_code)]
pub const WEBSOCKET_CLOSING: u16 = 2;
#[cfg(feature = "wasm")]
#[allow(dead_code)]
pub const WEBSOCKET_CLOSED: u16 = 3;
// Gzip magic header constants
pub const GZIP_MAGIC_1: u8 = 0x1f;
pub const GZIP_MAGIC_2: u8 = 0x8b;
/// Unified gzip decompression to string for both networking and WASM
#[cfg(any(feature = "networking", feature = "wasm"))]
pub fn gunzip_to_string(bytes: &[u8]) -> Result<String> {
let mut decoder = GzDecoder::new(bytes);
let mut decompressed = String::new();
decoder
.read_to_string(&mut decompressed)
.map_err(|e| ConnectorError::protocol_error(format!("Gzip decompression failed: {e}")))?;
Ok(decompressed)
}
/// Unified gzip decompression to bytes for both networking and WASM
#[cfg(any(feature = "networking", feature = "wasm"))]
pub fn gunzip_to_vec(bytes: &[u8]) -> Result<Vec<u8>> {
let mut decoder = GzDecoder::new(bytes);
let mut decompressed = Vec::new();
decoder
.read_to_end(&mut decompressed)
.map_err(|e| ConnectorError::protocol_error(format!("Gzip decompression failed: {e}")))?;
Ok(decompressed)
}
/// Unified gzip detection for both networking and WASM
#[cfg(any(feature = "networking", feature = "wasm"))]
pub fn is_gzip(bytes: &[u8]) -> bool {
bytes.len() >= 2 && bytes[0] == GZIP_MAGIC_1 && bytes[1] == GZIP_MAGIC_2
}
/// Unified debug logging for both networking and WASM modes
#[cfg(any(feature = "networking", feature = "wasm"))]
#[allow(dead_code)]
pub fn log_debug(message: &str) {
#[cfg(feature = "networking")]
if std::env::var("SOCKTOP_DEBUG").ok().as_deref() == Some("1") {
eprintln!("{message}");
}
#[cfg(all(feature = "wasm", not(feature = "networking")))]
eprintln!("{message}");
}
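`is_gzip` relies on the two-byte gzip magic number (`0x1f 0x8b`, per RFC 1952) at the start of a frame; this is how the request handlers decide whether to decompress a binary message before parsing. The detection logic, reproduced standalone:

```rust
// Gzip streams begin with the magic bytes 0x1f 0x8b (RFC 1952).
const GZIP_MAGIC_1: u8 = 0x1f;
const GZIP_MAGIC_2: u8 = 0x8b;

fn is_gzip(bytes: &[u8]) -> bool {
    bytes.len() >= 2 && bytes[0] == GZIP_MAGIC_1 && bytes[1] == GZIP_MAGIC_2
}

fn main() {
    // A gzip header: magic bytes, then deflate method (8), flags, mtime, ...
    assert!(is_gzip(&[0x1f, 0x8b, 0x08, 0x00]));
    // Plain JSON text falls through to the uncompressed parse path.
    assert!(!is_gzip(b"{\"cpu_total\":1.5}"));
    // Too short to carry the magic number.
    assert!(!is_gzip(&[0x1f]));
    println!("gzip detection ok");
}
```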


@ -0,0 +1,66 @@
//! WebSocket connection handling for WASM environments.
use crate::config::ConnectorConfig;
use crate::error::{ConnectorError, Result};
use crate::utils::{WEBSOCKET_CLOSED, WEBSOCKET_CLOSING, WEBSOCKET_OPEN};
use wasm_bindgen::JsCast;
use wasm_bindgen::prelude::*;
use web_sys::WebSocket;
/// Connect to the agent using WASM WebSocket
pub async fn connect_to_agent(config: &ConnectorConfig) -> Result<WebSocket> {
let websocket = WebSocket::new(&config.url).map_err(|e| {
ConnectorError::protocol_error(format!("Failed to create WebSocket: {e:?}"))
})?;
// Set binary type for proper message handling
websocket.set_binary_type(web_sys::BinaryType::Arraybuffer);
// Wait for connection to be ready with proper async delays
let start_time = js_sys::Date::now();
let timeout_ms = 10000.0; // 10 second connection timeout
// Poll connection status until ready or timeout
loop {
let ready_state = websocket.ready_state();
if ready_state == WEBSOCKET_OPEN {
// OPEN - connection is ready
break;
} else if ready_state == WEBSOCKET_CLOSED {
// CLOSED
return Err(ConnectorError::protocol_error(
"WebSocket connection closed",
));
} else if ready_state == WEBSOCKET_CLOSING {
// CLOSING
return Err(ConnectorError::protocol_error("WebSocket is closing"));
}
// Check timeout
let now = js_sys::Date::now();
if now - start_time > timeout_ms {
return Err(ConnectorError::protocol_error(
"WebSocket connection timeout",
));
}
// Proper async delay using setTimeout Promise
let promise = js_sys::Promise::new(&mut |resolve, _| {
let closure = Closure::once(move || resolve.call0(&JsValue::UNDEFINED));
web_sys::window()
.unwrap()
.set_timeout_with_callback_and_timeout_and_arguments_0(
closure.as_ref().unchecked_ref(),
100, // 100ms delay between polls
)
.unwrap();
closure.forget();
});
let _ = wasm_bindgen_futures::JsFuture::from(promise).await;
}
Ok(websocket)
}
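The connect loop is a poll-until-ready pattern: read the ready state, bail out on a terminal state, and give up after a deadline. A browser-free sketch of the same control flow, with a hypothetical `probe` closure standing in for `WebSocket::ready_state()` and a poll-count budget standing in for the 10 s wall-clock timeout (the numeric codes are the standard browser readyState values, which the crate's `WEBSOCKET_*` constants are assumed to mirror):

```rust
// Browser WebSocket readyState codes: CONNECTING=0, OPEN=1, CLOSING=2, CLOSED=3.
const OPEN: u16 = 1;
const CLOSING: u16 = 2;
const CLOSED: u16 = 3;

/// Poll `probe` until it reports OPEN, a terminal state, or `max_polls`
/// attempts have elapsed. The real async version sleeps 100 ms per poll.
fn wait_until_open(mut probe: impl FnMut() -> u16, max_polls: u32) -> Result<(), String> {
    for _ in 0..max_polls {
        match probe() {
            s if s == OPEN => return Ok(()),
            s if s == CLOSED => return Err("WebSocket connection closed".into()),
            s if s == CLOSING => return Err("WebSocket is closing".into()),
            _ => {} // still CONNECTING: yield and poll again
        }
    }
    Err("WebSocket connection timeout".into())
}

fn main() {
    // Opens on the third poll, well within the budget.
    let mut calls = 0;
    let opened = wait_until_open(
        move || {
            calls += 1;
            if calls >= 3 { OPEN } else { 0 }
        },
        10,
    );
    assert!(opened.is_ok());
    // Never opens: the loop gives up after max_polls.
    assert!(wait_until_open(|| 0, 5).is_err());
}
```

The design choice worth noting is that the timeout is enforced inside the loop rather than by racing a separate timer, which keeps the WASM version free of any task-spawning runtime.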

@@ -0,0 +1,7 @@
//! WASM module for browser WebSocket connections.

pub mod connection;
pub mod requests;

pub use connection::*;
pub use requests::*;

@@ -0,0 +1,421 @@
//! WebSocket request handlers for WASM environments.

use crate::error::{ConnectorError, Result};
use crate::pb::Processes;
use crate::utils::{gunzip_to_string, gunzip_to_vec, is_gzip, log_debug};
use crate::{
    AgentRequest, AgentResponse, DiskInfo, JournalResponse, Metrics, ProcessInfo,
    ProcessMetricsResponse, ProcessesPayload,
};
use prost::Message as ProstMessage;
use std::cell::RefCell;
use std::rc::Rc;
use wasm_bindgen::JsCast;
use wasm_bindgen::prelude::*;
use web_sys::WebSocket;

/// Send a request and wait for response with binary data handling
pub async fn send_request_and_wait(
    websocket: &WebSocket,
    request: AgentRequest,
) -> Result<AgentResponse> {
    // Use the legacy string format that the agent expects
    let request_string = request.to_legacy_string();

    // Send request
    websocket
        .send_with_str(&request_string)
        .map_err(|e| ConnectorError::protocol_error(format!("Failed to send message: {e:?}")))?;

    // Wait for response using JavaScript Promise
    let (response, binary_data) = wait_for_response_with_binary(websocket).await?;

    // Parse the response based on the request type
    match request {
        AgentRequest::Metrics => {
            // Check if this is binary data (protobuf from agent)
            if response.starts_with("BINARY_DATA:") {
                // Extract the byte count
                let byte_count: usize = response
                    .strip_prefix("BINARY_DATA:")
                    .unwrap_or("0")
                    .parse()
                    .unwrap_or(0);

                // For now, return a placeholder metrics response indicating binary data received
                // TODO: Implement proper protobuf decoding for binary data
                let placeholder_metrics = Metrics {
                    cpu_total: 0.0,
                    cpu_per_core: vec![0.0],
                    mem_total: 0,
                    mem_used: 0,
                    swap_total: 0,
                    swap_used: 0,
                    hostname: format!("Binary protobuf data ({byte_count} bytes)"),
                    cpu_temp_c: None,
                    disks: vec![],
                    networks: vec![],
                    top_processes: vec![],
                    gpus: None,
                    process_count: None,
                };
                Ok(AgentResponse::Metrics(placeholder_metrics))
            } else {
                // Try to parse as JSON (fallback)
                let metrics: Metrics = serde_json::from_str(&response).map_err(|e| {
                    ConnectorError::serialization_error(format!("Failed to parse metrics: {e}"))
                })?;
                Ok(AgentResponse::Metrics(metrics))
            }
        }
        AgentRequest::Disks => {
            let disks: Vec<DiskInfo> = serde_json::from_str(&response).map_err(|e| {
                ConnectorError::serialization_error(format!("Failed to parse disks: {e}"))
            })?;
            Ok(AgentResponse::Disks(disks))
        }
        AgentRequest::Processes => {
            log_debug(&format!(
                "🔍 Processing process request - response: {}",
                if response.len() > 100 {
                    format!("{}...", &response[..100])
                } else {
                    response.clone()
                }
            ));
            log_debug(&format!(
                "🔍 Binary data available: {}",
                binary_data.is_some()
            ));

            if let Some(ref data) = binary_data {
                log_debug(&format!("🔍 Binary data size: {} bytes", data.len()));

                // Check if it's gzipped data and decompress it first
                if is_gzip(data) {
                    log_debug("🔍 Process data is gzipped, decompressing...");
                    match gunzip_to_vec(data) {
                        Ok(decompressed_bytes) => {
                            log_debug(&format!(
                                "🔍 Successfully decompressed {} bytes, now decoding protobuf...",
                                decompressed_bytes.len()
                            ));
                            // Now decode the decompressed bytes as protobuf
                            match <Processes as ProstMessage>::decode(decompressed_bytes.as_slice()) {
                                Ok(protobuf_processes) => {
                                    log_debug(&format!(
                                        "✅ Successfully decoded {} processes from gzipped protobuf",
                                        protobuf_processes.rows.len()
                                    ));
                                    // Convert protobuf processes to ProcessInfo structs
                                    let processes: Vec<ProcessInfo> = protobuf_processes
                                        .rows
                                        .into_iter()
                                        .map(|p| ProcessInfo {
                                            pid: p.pid,
                                            name: p.name,
                                            cpu_usage: p.cpu_usage,
                                            mem_bytes: p.mem_bytes,
                                        })
                                        .collect();
                                    let processes_payload = ProcessesPayload {
                                        top_processes: processes,
                                        process_count: protobuf_processes.process_count as usize,
                                    };
                                    return Ok(AgentResponse::Processes(processes_payload));
                                }
                                Err(e) => {
                                    log_debug(&format!(
                                        "❌ Failed to decode decompressed protobuf: {e}"
                                    ));
                                }
                            }
                        }
                        Err(e) => {
                            log_debug(&format!(
                                "❌ Failed to decompress gzipped process data: {e}"
                            ));
                        }
                    }
                }
            }

            // Check if this is binary data (protobuf from agent)
            if response.starts_with("BINARY_DATA:") {
                // Extract the binary data size and decode protobuf
                let byte_count_str = response.strip_prefix("BINARY_DATA:").unwrap_or("0");
                let _byte_count: usize = byte_count_str.parse().unwrap_or(0);

                // Check if we have the actual binary data
                if let Some(binary_bytes) = binary_data {
                    log_debug(&format!(
                        "🔧 Decoding {} bytes of protobuf process data",
                        binary_bytes.len()
                    ));

                    // Try to decode the protobuf data using the prost Message trait
                    match <Processes as ProstMessage>::decode(&binary_bytes[..]) {
                        Ok(protobuf_processes) => {
                            log_debug(&format!(
                                "✅ Successfully decoded {} processes from protobuf",
                                protobuf_processes.rows.len()
                            ));
                            // Convert protobuf processes to ProcessInfo structs
                            let processes: Vec<ProcessInfo> = protobuf_processes
                                .rows
                                .into_iter()
                                .map(|p| ProcessInfo {
                                    pid: p.pid,
                                    name: p.name,
                                    cpu_usage: p.cpu_usage,
                                    mem_bytes: p.mem_bytes,
                                })
                                .collect();
                            let processes_payload = ProcessesPayload {
                                top_processes: processes,
                                process_count: protobuf_processes.process_count as usize,
                            };
                            Ok(AgentResponse::Processes(processes_payload))
                        }
                        Err(e) => {
                            log_debug(&format!("❌ Failed to decode protobuf: {e}"));
                            // Fallback to empty processes
                            let processes = ProcessesPayload {
                                top_processes: vec![],
                                process_count: 0,
                            };
                            Ok(AgentResponse::Processes(processes))
                        }
                    }
                } else {
                    log_debug(
                        "❌ Binary data indicator received but no actual binary data preserved",
                    );
                    let processes = ProcessesPayload {
                        top_processes: vec![],
                        process_count: 0,
                    };
                    Ok(AgentResponse::Processes(processes))
                }
            } else {
                // Try to parse as JSON (fallback)
                let processes: ProcessesPayload = serde_json::from_str(&response).map_err(|e| {
                    ConnectorError::serialization_error(format!("Failed to parse processes: {e}"))
                })?;
                Ok(AgentResponse::Processes(processes))
            }
        }
        AgentRequest::ProcessMetrics { pid: _ } => {
            // Parse JSON response for process metrics
            let process_metrics: ProcessMetricsResponse =
                serde_json::from_str(&response).map_err(|e| {
                    ConnectorError::serialization_error(format!(
                        "Failed to parse process metrics: {e}"
                    ))
                })?;
            Ok(AgentResponse::ProcessMetrics(process_metrics))
        }
        AgentRequest::JournalEntries { pid: _ } => {
            // Parse JSON response for journal entries
            let journal_entries: JournalResponse =
                serde_json::from_str(&response).map_err(|e| {
                    ConnectorError::serialization_error(format!(
                        "Failed to parse journal entries: {e}"
                    ))
                })?;
            Ok(AgentResponse::JournalEntries(journal_entries))
        }
    }
}
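Text and binary frames are bridged through a `BINARY_DATA:<len>` sentinel string: the message handler stashes the raw bytes in a side channel and stores the sentinel as the "response", and the parser above recognizes it via `strip_prefix`. A small sketch of that parse, with the same silent fallback to `0` on a malformed length (the helper name is hypothetical):

```rust
/// Extract the byte count from a "BINARY_DATA:<len>" sentinel,
/// defaulting to 0 on malformed input, mirroring the handler above.
/// Returns None for ordinary (non-sentinel) responses.
fn binary_sentinel_len(response: &str) -> Option<usize> {
    response
        .strip_prefix("BINARY_DATA:")
        .map(|rest| rest.parse::<usize>().unwrap_or(0))
}

fn main() {
    assert_eq!(binary_sentinel_len("BINARY_DATA:4096"), Some(4096));
    // A malformed length falls back to 0 rather than erroring.
    assert_eq!(binary_sentinel_len("BINARY_DATA:oops"), Some(0));
    // Ordinary JSON responses are not sentinels.
    assert_eq!(binary_sentinel_len("{\"pid\":1}"), None);
}
```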
async fn wait_for_response_with_binary(websocket: &WebSocket) -> Result<(String, Option<Vec<u8>>)> {
    let start_time = js_sys::Date::now();
    let timeout_ms = 10000.0; // 10 second timeout

    // Store the response in a shared location
    let response_cell = Rc::new(RefCell::new(None::<String>));
    let binary_data_cell = Rc::new(RefCell::new(None::<Vec<u8>>));
    let error_cell = Rc::new(RefCell::new(None::<String>));

    // Use a unique request ID to avoid message collision
    let _request_id = js_sys::Math::random();
    let response_received = Rc::new(RefCell::new(false));

    // Set up the message handler that only processes if we haven't gotten a response yet
    {
        let response_cell = response_cell.clone();
        let binary_data_cell = binary_data_cell.clone();
        let response_received = response_received.clone();
        let onmessage_callback = Closure::wrap(Box::new(move |e: web_sys::MessageEvent| {
            // Only process if we haven't already received a response for this request
            if !*response_received.borrow() {
                // Handle text messages (JSON responses for metrics/disks)
                if let Ok(data) = e.data().dyn_into::<js_sys::JsString>() {
                    let message = data.as_string().unwrap_or_default();
                    if !message.is_empty() {
                        // Debug: Log what we received (truncated)
                        let preview = if message.len() > 100 {
                            format!("{}...", &message[..100])
                        } else {
                            message.clone()
                        };
                        log_debug(&format!("🔍 Received text: {preview}"));
                        *response_cell.borrow_mut() = Some(message);
                        *response_received.borrow_mut() = true;
                    }
                }
                // Handle binary messages (could be JSON as text bytes or actual protobuf)
                else if let Ok(array_buffer) = e.data().dyn_into::<js_sys::ArrayBuffer>() {
                    let uint8_array = js_sys::Uint8Array::new(&array_buffer);
                    let length = uint8_array.length() as usize;
                    let mut bytes = vec![0u8; length];
                    uint8_array.copy_to(&mut bytes);
                    log_debug(&format!("🔍 Received binary data: {length} bytes"));

                    // Debug: Log the first few bytes to see what we're dealing with
                    let first_bytes = if bytes.len() >= 4 {
                        format!(
                            "0x{:02x} 0x{:02x} 0x{:02x} 0x{:02x}",
                            bytes[0], bytes[1], bytes[2], bytes[3]
                        )
                    } else {
                        format!("Only {} bytes available", bytes.len())
                    };
                    log_debug(&format!("🔍 First bytes: {first_bytes}"));

                    // Try to decode as UTF-8 text first (in case it's JSON sent as binary)
                    match String::from_utf8(bytes.clone()) {
                        Ok(text) => {
                            // If it decodes to valid UTF-8, check if it looks like JSON
                            let trimmed = text.trim();
                            if (trimmed.starts_with('{') && trimmed.ends_with('}'))
                                || (trimmed.starts_with('[') && trimmed.ends_with(']'))
                            {
                                log_debug(&format!(
                                    "🔍 Binary data is actually JSON text: {}",
                                    if text.len() > 100 {
                                        format!("{}...", &text[..100])
                                    } else {
                                        text.clone()
                                    }
                                ));
                                *response_cell.borrow_mut() = Some(text);
                                *response_received.borrow_mut() = true;
                            } else {
                                log_debug(&format!(
                                    "🔍 Binary data is UTF-8 text but not JSON: {}",
                                    if text.len() > 100 {
                                        format!("{}...", &text[..100])
                                    } else {
                                        text.clone()
                                    }
                                ));
                                *response_cell.borrow_mut() = Some(text);
                                *response_received.borrow_mut() = true;
                            }
                        }
                        Err(_) => {
                            // If it's not valid UTF-8, check if it's gzipped data
                            if is_gzip(&bytes) {
                                log_debug(&format!(
                                    "🔍 Binary data appears to be gzipped ({length} bytes)"
                                ));
                                // Try to decompress using unified gzip decompression
                                match gunzip_to_string(&bytes) {
                                    Ok(decompressed_text) => {
                                        log_debug(&format!(
                                            "🔍 Gzipped data decompressed to text: {}",
                                            if decompressed_text.len() > 100 {
                                                format!("{}...", &decompressed_text[..100])
                                            } else {
                                                decompressed_text.clone()
                                            }
                                        ));
                                        *response_cell.borrow_mut() = Some(decompressed_text);
                                        *response_received.borrow_mut() = true;
                                    }
                                    Err(e) => {
                                        log_debug(&format!("🔍 Failed to decompress gzip: {e}"));
                                        // Fallback: treat as actual binary protobuf data
                                        *binary_data_cell.borrow_mut() = Some(bytes.clone());
                                        *response_cell.borrow_mut() =
                                            Some(format!("BINARY_DATA:{length}"));
                                        *response_received.borrow_mut() = true;
                                    }
                                }
                            } else {
                                // If it's not valid UTF-8 and not gzipped, it's likely actual binary protobuf data
                                log_debug(&format!(
                                    "🔍 Binary data is actual protobuf ({length} bytes)"
                                ));
                                *binary_data_cell.borrow_mut() = Some(bytes);
                                *response_cell.borrow_mut() = Some(format!("BINARY_DATA:{length}"));
                                *response_received.borrow_mut() = true;
                            }
                        }
                    }
                } else {
                    // Log what type of data we got
                    log_debug(&format!("🔍 Received unknown data type: {:?}", e.data()));
                }
            }
        }) as Box<dyn FnMut(_)>);
        websocket.set_onmessage(Some(onmessage_callback.as_ref().unchecked_ref()));
        onmessage_callback.forget();
    }

    // Set up the error handler
    {
        let error_cell = error_cell.clone();
        let response_received = response_received.clone();
        let onerror_callback = Closure::wrap(Box::new(move |_e: web_sys::ErrorEvent| {
            if !*response_received.borrow() {
                *error_cell.borrow_mut() = Some("WebSocket error occurred".to_string());
                *response_received.borrow_mut() = true;
            }
        }) as Box<dyn FnMut(_)>);
        websocket.set_onerror(Some(onerror_callback.as_ref().unchecked_ref()));
        onerror_callback.forget();
    }

    // Poll for response with proper async delays
    loop {
        // Check for response
        if *response_received.borrow() {
            if let Some(response) = response_cell.borrow().as_ref() {
                let binary_data = binary_data_cell.borrow().clone();
                return Ok((response.clone(), binary_data));
            }
            if let Some(error) = error_cell.borrow().as_ref() {
                return Err(ConnectorError::protocol_error(error));
            }
        }

        // Check timeout
        let now = js_sys::Date::now();
        if now - start_time > timeout_ms {
            *response_received.borrow_mut() = true; // Mark as done to prevent future processing
            return Err(ConnectorError::protocol_error("WebSocket response timeout"));
        }

        // Wait 50ms before checking again
        let promise = js_sys::Promise::new(&mut |resolve, _| {
            let closure = Closure::once(move || resolve.call0(&JsValue::UNDEFINED));
            web_sys::window()
                .unwrap()
                .set_timeout_with_callback_and_timeout_and_arguments_0(
                    closure.as_ref().unchecked_ref(),
                    50,
                )
                .unwrap();
            closure.forget();
        });
        let _ = wasm_bindgen_futures::JsFuture::from(promise).await;
    }
}
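The onmessage handler classifies each binary frame in a fixed order: valid UTF-8 is treated as text (JSON or not), otherwise the gzip magic triggers decompression, and anything else is treated as raw protobuf. A dependency-free sketch of that decision ladder, assuming the same ordering; actual decompression is elided since it needs `flate2`:

```rust
#[derive(Debug, PartialEq)]
enum Frame {
    Text(String), // valid UTF-8, handed to the JSON parsers
    Gzip,         // starts with 0x1f 0x8b, decompressed before use
    Protobuf,     // anything else is treated as raw protobuf
}

/// Mirror of the classification order in the onmessage handler:
/// UTF-8 first, then the gzip magic, then raw binary.
fn classify(bytes: &[u8]) -> Frame {
    match String::from_utf8(bytes.to_vec()) {
        Ok(text) => Frame::Text(text),
        Err(_) if bytes.len() >= 2 && bytes[0] == 0x1f && bytes[1] == 0x8b => Frame::Gzip,
        Err(_) => Frame::Protobuf,
    }
}

fn main() {
    assert_eq!(classify(b"{\"ok\":true}"), Frame::Text("{\"ok\":true}".into()));
    assert_eq!(classify(&[0x1f, 0x8b, 0x08, 0xff]), Frame::Gzip);
    assert_eq!(classify(&[0xff, 0xfe, 0x00]), Frame::Protobuf);
}
```

Ordering matters here: a gzip stream can never be valid UTF-8 (the `0x8b` header byte is an illegal continuation byte), so checking UTF-8 first cannot misroute compressed frames.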

@@ -0,0 +1,51 @@
use socktop_connector::{
    AgentRequest, AgentResponse, connect_to_socktop_agent, connect_to_socktop_agent_with_tls,
};

// Integration probe: only runs when SOCKTOP_WS is set to an agent WebSocket URL.
// Example: SOCKTOP_WS=ws://127.0.0.1:3000/ws cargo test -p socktop_connector --test integration_test -- --nocapture
#[tokio::test]
async fn probe_ws_endpoints() {
    // Gate the test to avoid CI failures when no agent is running.
    let url = match std::env::var("SOCKTOP_WS") {
        Ok(v) if !v.is_empty() => v,
        _ => {
            eprintln!(
                "skipping ws_probe: set SOCKTOP_WS=ws://host:port/ws to run this integration test"
            );
            return;
        }
    };

    // Optional pinned CA for WSS/self-signed setups
    let tls_ca = std::env::var("SOCKTOP_TLS_CA").ok();
    let mut connector = if let Some(ca_path) = tls_ca {
        connect_to_socktop_agent_with_tls(&url, ca_path, true)
            .await
            .expect("connect ws with TLS")
    } else {
        connect_to_socktop_agent(&url).await.expect("connect ws")
    };

    // Should get fast metrics quickly
    let response = connector.request(AgentRequest::Metrics).await;
    assert!(response.is_ok(), "expected Metrics payload within timeout");
    if let Ok(AgentResponse::Metrics(_)) = response {
        // Success
    } else {
        panic!("expected Metrics response");
    }

    // Processes may be gzipped and a bit slower, but should arrive
    let response = connector.request(AgentRequest::Processes).await;
    assert!(
        response.is_ok(),
        "expected Processes payload within timeout"
    );
    if let Ok(AgentResponse::Processes(_)) = response {
        // Success
    } else {
        panic!("expected Processes response");
    }
}

socktop_wasm_test/.gitignore vendored Normal file
@@ -0,0 +1,15 @@
# Build artifacts
/target/
/pkg/

# IDE files
.vscode/
.idea/

# OS files
.DS_Store
Thumbs.db

# Backup files
*~
*.bak

socktop_wasm_test/Cargo.lock generated Normal file
@@ -0,0 +1,741 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 4
[[package]]
name = "adler2"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "320119579fcad9c21884f5c4861d16174d0e06250625266f50fe6898340abefa"
[[package]]
name = "aho-corasick"
version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e60d3430d3a69478ad0993f19238d2df97c507009a52b3c10addcd7f6bcb916"
dependencies = [
"memchr",
]
[[package]]
name = "anyhow"
version = "1.0.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b0674a1ddeecb70197781e945de4b3b8ffb61fa939a5597bcf48503737663100"
[[package]]
name = "bitflags"
version = "2.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2261d10cca569e4643e526d8dc2e62e433cc8aba21ab764233731f8d369bf394"
[[package]]
name = "bumpalo"
version = "3.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46c5e41b57b8bba42a04676d81cb89e9ee8e859a1a66f80a5a72e1cb76b34d43"
[[package]]
name = "bytes"
version = "1.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d71b6127be86fdcfddb610f7182ac57211d4b18a3e9c82eb2d17662f2227ad6a"
[[package]]
name = "cfg-if"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2fd1289c04a9ea8cb22300a459a72a385d7c73d3259e2ed7dcb2af674838cfa9"
[[package]]
name = "console_error_panic_hook"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a06aeb73f470f66dcdbf7223caeebb85984942f22f1adb2a088cf9668146bbbc"
dependencies = [
"cfg-if",
"wasm-bindgen",
]
[[package]]
name = "crc32fast"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9481c1c90cbf2ac953f07c8d4a58aa3945c425b7185c9154d67a65e4230da511"
dependencies = [
"cfg-if",
]
[[package]]
name = "either"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719"
[[package]]
name = "equivalent"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"
[[package]]
name = "errno"
version = "0.3.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "778e2ac28f6c47af28e4907f13ffd1e1ddbd400980a9abd7c8df189bf578a5ad"
dependencies = [
"libc",
"windows-sys",
]
[[package]]
name = "fastrand"
version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
[[package]]
name = "fixedbitset"
version = "0.5.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d674e81391d1e1ab681a28d99df07927c6d4aa5b027d7da16ba32d1d21ecd99"
[[package]]
name = "flate2"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a3d7db9596fecd151c5f638c0ee5d5bd487b6e0ea232e5dc96d5250f6f94b1d"
dependencies = [
"crc32fast",
"miniz_oxide",
]
[[package]]
name = "getrandom"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "335ff9f135e4384c8150d6f27c6daed433577f86b4750418338c01a1a2528592"
dependencies = [
"cfg-if",
"js-sys",
"libc",
"wasi 0.11.1+wasi-snapshot-preview1",
"wasm-bindgen",
]
[[package]]
name = "getrandom"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26145e563e54f2cadc477553f1ec5ee650b00862f0a58bcd12cbdc5f0ea2d2f4"
dependencies = [
"cfg-if",
"libc",
"r-efi",
"wasi 0.14.4+wasi-0.2.4",
]
[[package]]
name = "hashbrown"
version = "0.15.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9229cfe53dfd69f0609a49f65461bd93001ea1ef889cd5529dd176593f5338a1"
[[package]]
name = "heck"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "indexmap"
version = "2.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2481980430f9f78649238835720ddccc57e52df14ffce1c6f37391d61b563e9"
dependencies = [
"equivalent",
"hashbrown",
]
[[package]]
name = "itertools"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b192c782037fadd9cfa75548310488aabdbf3d2da73885b31bd0abd03351285"
dependencies = [
"either",
]
[[package]]
name = "itoa"
version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"
[[package]]
name = "js-sys"
version = "0.3.78"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c0b063578492ceec17683ef2f8c5e89121fbd0b172cbc280635ab7567db2738"
dependencies = [
"once_cell",
"wasm-bindgen",
]
[[package]]
name = "libc"
version = "0.2.175"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a82ae493e598baaea5209805c49bbf2ea7de956d50d7da0da1164f9c6d28543"
[[package]]
name = "linux-raw-sys"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd945864f07fe9f5371a27ad7b52a172b4b499999f1d97574c9fa68373937e12"
[[package]]
name = "log"
version = "0.4.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34080505efa8e45a4b816c349525ebe327ceaa8559756f0356cba97ef3bf7432"
[[package]]
name = "memchr"
version = "2.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a282da65faaf38286cf3be983213fcf1d2e2a58700e808f83f4ea9a4804bc0"
[[package]]
name = "miniz_oxide"
version = "0.8.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fa76a2c86f704bdb222d66965fb3d63269ce38518b83cb0575fca855ebb6316"
dependencies = [
"adler2",
]
[[package]]
name = "multimap"
version = "0.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d87ecb2933e8aeadb3e3a02b828fed80a7528047e68b4f424523a0981a3a084"
[[package]]
name = "once_cell"
version = "1.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
[[package]]
name = "petgraph"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3672b37090dbd86368a4145bc067582552b29c27377cad4e0a306c97f9bd7772"
dependencies = [
"fixedbitset",
"indexmap",
]
[[package]]
name = "prettyplease"
version = "0.2.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "479ca8adacdd7ce8f1fb39ce9ecccbfe93a3f1344b3d0d97f20bc0196208f62b"
dependencies = [
"proc-macro2",
"syn",
]
[[package]]
name = "proc-macro2"
version = "1.0.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89ae43fd86e4158d6db51ad8e2b80f313af9cc74f5c0e03ccb87de09998732de"
dependencies = [
"unicode-ident",
]
[[package]]
name = "prost"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2796faa41db3ec313a31f7624d9286acf277b52de526150b7e69f3debf891ee5"
dependencies = [
"bytes",
"prost-derive",
]
[[package]]
name = "prost-build"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be769465445e8c1474e9c5dac2018218498557af32d9ed057325ec9a41ae81bf"
dependencies = [
"heck",
"itertools",
"log",
"multimap",
"once_cell",
"petgraph",
"prettyplease",
"prost",
"prost-types",
"regex",
"syn",
"tempfile",
]
[[package]]
name = "prost-derive"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a56d757972c98b346a9b766e3f02746cde6dd1cd1d1d563472929fdd74bec4d"
dependencies = [
"anyhow",
"itertools",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "prost-types"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "52c2c1bf36ddb1a1c396b3601a3cec27c2462e45f07c386894ec3ccf5332bd16"
dependencies = [
"prost",
]
[[package]]
name = "protoc-bin-vendored"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d1c381df33c98266b5f08186583660090a4ffa0889e76c7e9a5e175f645a67fa"
dependencies = [
"protoc-bin-vendored-linux-aarch_64",
"protoc-bin-vendored-linux-ppcle_64",
"protoc-bin-vendored-linux-s390_64",
"protoc-bin-vendored-linux-x86_32",
"protoc-bin-vendored-linux-x86_64",
"protoc-bin-vendored-macos-aarch_64",
"protoc-bin-vendored-macos-x86_64",
"protoc-bin-vendored-win32",
]
[[package]]
name = "protoc-bin-vendored-linux-aarch_64"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c350df4d49b5b9e3ca79f7e646fde2377b199e13cfa87320308397e1f37e1a4c"
[[package]]
name = "protoc-bin-vendored-linux-ppcle_64"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a55a63e6c7244f19b5c6393f025017eb5d793fd5467823a099740a7a4222440c"
[[package]]
name = "protoc-bin-vendored-linux-s390_64"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1dba5565db4288e935d5330a07c264a4ee8e4a5b4a4e6f4e83fad824cc32f3b0"
[[package]]
name = "protoc-bin-vendored-linux-x86_32"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8854774b24ee28b7868cd71dccaae8e02a2365e67a4a87a6cd11ee6cdbdf9cf5"
[[package]]
name = "protoc-bin-vendored-linux-x86_64"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b38b07546580df720fa464ce124c4b03630a6fb83e05c336fea2a241df7e5d78"
[[package]]
name = "protoc-bin-vendored-macos-aarch_64"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89278a9926ce312e51f1d999fee8825d324d603213344a9a706daa009f1d8092"
[[package]]
name = "protoc-bin-vendored-macos-x86_64"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81745feda7ccfb9471d7a4de888f0652e806d5795b61480605d4943176299756"
[[package]]
name = "protoc-bin-vendored-win32"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95067976aca6421a523e491fce939a3e65249bac4b977adee0ee9771568e8aa3"
[[package]]
name = "quote"
version = "1.0.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1885c039570dc00dcb4ff087a89e185fd56bae234ddc7f056a945bf36467248d"
dependencies = [
"proc-macro2",
]
[[package]]
name = "r-efi"
version = "5.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f"
[[package]]
name = "regex"
version = "1.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "23d7fd106d8c02486a8d64e778353d1cffe08ce79ac2e82f540c86d0facf6912"
dependencies = [
"aho-corasick",
"memchr",
"regex-automata",
"regex-syntax",
]
[[package]]
name = "regex-automata"
version = "0.4.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b9458fa0bfeeac22b5ca447c63aaf45f28439a709ccd244698632f9aa6394d6"
dependencies = [
"aho-corasick",
"memchr",
"regex-syntax",
]
[[package]]
name = "regex-syntax"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "caf4aa5b0f434c91fe5c7f1ecb6a5ece2130b02ad2a590589dda5146df959001"
[[package]]
name = "rustix"
version = "1.0.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11181fbabf243db407ef8df94a6ce0b2f9a733bd8be4ad02b4eda9602296cac8"
dependencies = [
"bitflags",
"errno",
"libc",
"linux-raw-sys",
"windows-sys",
]
[[package]]
name = "rustversion"
version = "1.0.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d"
[[package]]
name = "ryu"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "serde"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.143"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d401abef1d108fbd9cbaebc3e46611f4b1021f714a0597a71f41ee463f5f4a5a"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
]
[[package]]
name = "socktop_connector"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a63dadaa5105df11b0684759a829012257d48e72a469cc554c0cf4394605f5a"
dependencies = [
"flate2",
"js-sys",
"prost",
"prost-build",
"protoc-bin-vendored",
"serde",
"serde_json",
"thiserror",
"wasm-bindgen",
"wasm-bindgen-futures",
"web-sys",
]
[[package]]
name = "socktop_wasm_test"
version = "0.1.0"
dependencies = [
"console_error_panic_hook",
"getrandom 0.2.16",
"js-sys",
"serde",
"serde_json",
"socktop_connector",
"wasm-bindgen",
"wasm-bindgen-futures",
"web-sys",
]
[[package]]
name = "syn"
version = "2.0.106"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ede7c438028d4436d71104916910f5bb611972c5cfd7f89b8300a8186e6fada6"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "tempfile"
version = "3.21.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "15b61f8f20e3a6f7e0649d825294eaf317edce30f82cf6026e7e4cb9222a7d1e"
dependencies = [
"fastrand",
"getrandom 0.3.3",
"once_cell",
"rustix",
"windows-sys",
]
[[package]]
name = "thiserror"
version = "2.0.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3467d614147380f2e4e374161426ff399c91084acd2363eaf549172b3d5e60c0"
dependencies = [
"thiserror-impl",
]
[[package]]
name = "thiserror-impl"
version = "2.0.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c5e1be1c48b9172ee610da68fd9cd2770e7a4056cb3fc98710ee6906f0c7960"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "unicode-ident"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a5f39404a5da50712a4c1eecf25e90dd62b613502b7e925fd4e4d19b5c96512"
[[package]]
name = "wasi"
version = "0.11.1+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b"
[[package]]
name = "wasi"
version = "0.14.4+wasi-0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "88a5f4a424faf49c3c2c344f166f0662341d470ea185e939657aaff130f0ec4a"
dependencies = [
"wit-bindgen",
]
[[package]]
name = "wasm-bindgen"
version = "0.2.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e14915cadd45b529bb8d1f343c4ed0ac1de926144b746e2710f9cd05df6603b"
dependencies = [
"cfg-if",
"once_cell",
"rustversion",
"wasm-bindgen-macro",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e28d1ba982ca7923fd01448d5c30c6864d0a14109560296a162f80f305fb93bb"
dependencies = [
"bumpalo",
"log",
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-futures"
version = "0.4.51"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ca85039a9b469b38336411d6d6ced91f3fc87109a2a27b0c197663f5144dffe"
dependencies = [
"cfg-if",
"js-sys",
"once_cell",
"wasm-bindgen",
"web-sys",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c3d463ae3eff775b0c45df9da45d68837702ac35af998361e2c84e7c5ec1b0d"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
]
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7bb4ce89b08211f923caf51d527662b75bdc9c9c7aab40f86dcb9fb85ac552aa"
dependencies = [
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f143854a3b13752c6950862c906306adb27c7e839f7414cec8fea35beab624c1"
dependencies = [
"unicode-ident",
]
[[package]]
name = "web-sys"
version = "0.3.78"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77e4b637749ff0d92b8fad63aa1f7cff3cbe125fd49c175cd6345e7272638b12"
dependencies = [
"js-sys",
"wasm-bindgen",
]
[[package]]
name = "windows-link"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e6ad25900d524eaabdbbb96d20b4311e1e7ae1699af4fb28c17ae66c80d798a"
[[package]]
name = "windows-sys"
version = "0.60.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2f500e4d28234f72040990ec9d39e3a6b950f9f22d3dba18416c35882612bcb"
dependencies = [
"windows-targets",
]
[[package]]
name = "windows-targets"
version = "0.53.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d5fe6031c4041849d7c496a8ded650796e7b6ecc19df1a431c1a363342e5dc91"
dependencies = [
"windows-link",
"windows_aarch64_gnullvm",
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_gnullvm",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_gnullvm",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86b8d5f90ddd19cb4a147a5fa63ca848db3df085e25fee3cc10b39b6eebae764"
[[package]]
name = "windows_aarch64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7651a1f62a11b8cbd5e0d42526e55f2c99886c77e007179efff86c2b137e66c"
[[package]]
name = "windows_i686_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1dc67659d35f387f5f6c479dc4e28f1d4bb90ddd1a5d3da2e5d97b42d6272c3"
[[package]]
name = "windows_i686_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ce6ccbdedbf6d6354471319e781c0dfef054c81fbc7cf83f338a4296c0cae11"
[[package]]
name = "windows_i686_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "581fee95406bb13382d2f65cd4a908ca7b1e4c2f1917f143ba16efe98a589b5d"
[[package]]
name = "windows_x86_64_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e55b5ac9ea33f2fc1716d1742db15574fd6fc8dadc51caab1c16a3d3b4190ba"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a6e035dd0599267ce1ee132e51c27dd29437f63325753051e71dd9e42406c57"
[[package]]
name = "windows_x86_64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "271414315aff87387382ec3d271b52d7ae78726f5d44ac98b4f4030c91880486"
[[package]]
name = "wit-bindgen"
version = "0.45.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c573471f125075647d03df72e026074b7203790d41351cd6edc96f46bcccd36"


@@ -0,0 +1,36 @@
[package]
name = "socktop_wasm_test"
version = "0.1.0"
edition = "2021"

# Make this a standalone package, not part of the parent workspace
[workspace]

[lib]
crate-type = ["cdylib"]

[dependencies]
# Use WASM features for WebSocket connectivity (published version)
socktop_connector = { version = "0.1.5", default-features = false, features = ["wasm"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
wasm-bindgen = "0.2"
wasm-bindgen-futures = "0.4"
console_error_panic_hook = "0.1"
js-sys = "0.3"

[dependencies.web-sys]
version = "0.3"
features = [
    "console",
    "WebSocket",
    "MessageEvent",
    "ErrorEvent",
    "CloseEvent",
    "BinaryType",
]

# Enable JS feature for WASM random number generation
[dependencies.getrandom]
version = "0.2"
features = ["js"]

socktop_wasm_test/README.md

@@ -0,0 +1,150 @@
# WASM Compatibility Guide for socktop_connector
This directory contains a complete WebAssembly (WASM) compatibility test and implementation guide for the `socktop_connector` library.
## Overview
`socktop_connector` provides **full WebSocket networking support** for WebAssembly environments. The library includes complete connectivity functionality with automatic compression and protobuf decoding, making it easy to connect to socktop agents directly from browser applications.
## What Works in WASM
- ✅ **Full WebSocket connections** (`ws://` connections)
- ✅ **All request types** (`AgentRequest::Metrics`, `AgentRequest::Disks`, `AgentRequest::Processes`)
- ✅ **Automatic data processing**: Gzip decompression for metrics/disks, protobuf decoding for processes
- ✅ Configuration types (`ConnectorConfig`)
- ✅ Request/Response types (`AgentRequest`, `AgentResponse`)
- ✅ JSON serialization/deserialization of all types
- ✅ Protocol and version configuration builders
- ✅ All type-safe validation and error handling
## What Doesn't Work in WASM
- ❌ TLS connections (`wss://`) - use `ws://` only
- ❌ TLS certificate handling (use non-TLS endpoints)
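
Because only plain `ws://` endpoints are supported, a page served over HTTPS cannot reach the agent at all: browsers block mixed-content WebSocket connections. A small hypothetical guard for this case (not part of the `socktop_connector` API; the `/ws` path matches the configuration used later in this guide) might look like:

```javascript
// Build a ws:// agent URL and reject the case where the page itself is
// served over https, since the WASM feature does not support wss:// and
// browsers refuse mixed-content ws:// connections from https pages.
function agentUrl(host, port, pageProtocol) {
  if (pageProtocol === 'https:') {
    throw new Error('wss:// is unsupported; serve the test page over http:// to use ws://');
  }
  return `ws://${host}:${port}/ws`;
}
```

In a browser you would pass `window.location.protocol` as `pageProtocol`, e.g. `agentUrl('localhost', 3000, window.location.protocol)`.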
## Quick Start - WASM Test Page
```bash
# Note: the test assumes an agent is running on localhost at port 3000.
# To use an alternate configuration, update lib.rs before building.
# Build the WASM package
wasm-pack build --target web --out-dir pkg
# Serve the test page
basic-http-server . --addr 127.0.0.1:8000
# Open http://127.0.0.1:8000 in your browser
# Check the browser console for test results
```
<img src="./screenshot_09092025_134458.jpg" width="85%">
## WASM Dependencies
The test uses the WASM-compatible networking features:
```toml
[dependencies]
socktop_connector = { version = "0.1.5", default-features = false, features = ["wasm"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
wasm-bindgen = "0.2"
console_error_panic_hook = "0.1"
[dependencies.web-sys]
version = "0.3"
features = ["console"]
```
**Key**: Use `features = ["wasm"]` to enable full WebSocket networking support in WASM builds.
## Implementation Strategy
### 1. Use socktop_connector Types for Configuration
```rust
use wasm_bindgen::prelude::*;
use socktop_connector::{ConnectorConfig, AgentRequest, AgentResponse};
#[wasm_bindgen]
pub fn create_config() -> String {
// Use socktop_connector types for type-safe configuration
let config = ConnectorConfig::new("ws://localhost:3000/ws")
.with_protocols(vec!["socktop".to_string(), "v1".to_string()])
.with_version("13".to_string());
// Return JSON for use with browser WebSocket API
serde_json::to_string(&config).unwrap_or_default()
}
```
### 2. Create Type-Safe Requests
```rust
#[wasm_bindgen]
pub fn create_metrics_request() -> String {
let request = AgentRequest::Metrics;
serde_json::to_string(&request).unwrap_or_default()
}
#[wasm_bindgen]
pub fn create_processes_request() -> String {
let request = AgentRequest::Processes;
serde_json::to_string(&request).unwrap_or_default()
}
```
### 3. Parse Responses with Type Safety
```rust
#[wasm_bindgen]
pub fn parse_metrics_response(json: &str) -> Option<String> {
match serde_json::from_str::<AgentResponse>(json) {
Ok(AgentResponse::Metrics(metrics)) => {
Some(format!("CPU: {}%, Memory: {}MB",
metrics.cpu_total,
metrics.mem_used / 1024 / 1024))
}
_ => None
}
}
```
### 4. Browser Integration
Then in JavaScript:
```javascript
import init, {
create_config,
create_metrics_request,
parse_metrics_response
} from './pkg/socktop_wasm_test.js';
async function run() {
await init();
// Use type-safe configuration
const configJson = create_config();
const config = JSON.parse(configJson);
// Create WebSocket with proper protocols
const ws = new WebSocket(config.url, config.ws_protocols);
ws.onopen = () => {
// Send type-safe requests
ws.send(create_metrics_request());
};
ws.onmessage = (event) => {
// Handle responses with type safety
const result = parse_metrics_response(event.data);
if (result) {
console.log(result);
}
};
}
run();
```
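
When the raw browser `WebSocket` is used as above, binary frames arrive exactly as the agent sent them. If your agent sends gzip-compressed binary frames (the format the native connector decompresses automatically), the page has to inflate them itself before calling `parse_metrics_response`. This is an assumption about the wire format, not part of the `socktop_connector` API; a minimal sketch using the browser's standard `DecompressionStream`:

```javascript
// Hypothetical helper: inflate a gzip-compressed binary WebSocket frame
// with the built-in DecompressionStream and return the decoded JSON text.
// Plain-text frames should be passed through unchanged by the caller.
async function inflateFrame(data) {
  const stream = new Blob([data]).stream()
    .pipeThrough(new DecompressionStream('gzip'));
  return await new Response(stream).text();
}
```

With `ws.binaryType = 'arraybuffer'`, `event.data` arrives as an `ArrayBuffer` that can be handed directly to `inflateFrame`.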


@@ -0,0 +1,154 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Socktop Connector WASM Test</title>
<style>
body { font-family: monospace; padding: 20px; background-color: #f5f5f5; }
.container { max-width: 800px; margin: 0 auto; background: white; padding: 20px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.1); }
.log { margin: 5px 0; padding: 5px; border-radius: 4px; }
.success { color: #0a7c0a; background-color: #e8f5e8; }
.warning { color: #b8860b; background-color: #fdf6e3; }
.error { color: #d2322d; background-color: #f9e6e6; }
.info { color: #0969da; background-color: #e6f3ff; }
button {
background: #0969da;
color: white;
border: none;
padding: 10px 20px;
border-radius: 4px;
cursor: pointer;
font-size: 14px;
margin: 10px 0;
}
button:hover { background: #0757c7; }
button:disabled { background: #ccc; cursor: not-allowed; }
.server-input {
margin: 10px 0;
padding: 8px;
width: 300px;
border: 1px solid #ddd;
border-radius: 4px;
font-family: monospace;
}
.input-group { margin: 15px 0; }
.input-group label { display: block; margin-bottom: 5px; font-weight: bold; }
#output {
border: 1px solid #ddd;
border-radius: 4px;
padding: 10px;
min-height: 200px;
background: #fafafa;
font-family: 'Courier New', monospace;
}
.status { font-weight: bold; margin: 10px 0; }
</style>
</head>
<body>
<div class="container">
<h1>🦀 Socktop Connector WASM Test</h1>
<div class="status">
<p><strong>Test Purpose:</strong> Verify socktop_connector works in WebAssembly without TLS dependencies</p>
<p><strong>Status:</strong> <span id="status">Loading WASM module...</span></p>
</div>
<div class="input-group">
<label for="server-url">Server URL:</label>
<input type="text" id="server-url" class="server-input" value="ws://localhost:3000/ws"
placeholder="ws://localhost:3000/ws">
</div>
<button id="test-btn" disabled>Run WASM Test</button>
<button id="clear-btn">Clear Output</button>
<h3>Output:</h3>
<div id="output"></div>
<h3>ICON LEGEND:</h3>
<ul>
<li>✅ <strong>Success:</strong> No rustls/TLS errors, connector loads in WASM</li>
<li>⚠️ <strong>Expected:</strong> Connection failures without running socktop_agent</li>
<li>❌ <strong>Failure:</strong> Build errors or TLS dependency issues</li>
</ul>
<p><small>💡 <strong>Tip:</strong> start socktop_agent with: <code>socktop_agent --port 3000</code></small></p>
</div>
<script type="module">
import init, { test_socktop_connector } from './pkg/socktop_wasm_test.js';
const output = document.getElementById('output');
const testBtn = document.getElementById('test-btn');
const clearBtn = document.getElementById('clear-btn');
const status = document.getElementById('status');
// Capture console output and display it on page
const originalLog = console.log;
const originalError = console.error;
function addLog(text, type = 'info') {
const div = document.createElement('div');
div.className = `log ${type}`;
div.textContent = new Date().toLocaleTimeString() + ' - ' + text;
output.appendChild(div);
output.scrollTop = output.scrollHeight;
}
console.log = function(...args) {
originalLog.apply(console, args);
const text = args.join(' ');
let type = 'info';
if (text.includes('✅')) {
type = 'success';
} else if (text.includes('⚠️')) {
type = 'warning';
} else if (text.includes('❌')) {
type = 'error';
}
addLog(text, type);
};
console.error = function(...args) {
originalError.apply(console, args);
addLog('ERROR: ' + args.join(' '), 'error');
};
clearBtn.onclick = () => {
output.innerHTML = '';
};
async function run() {
try {
await init();
addLog('WASM module initialized successfully!', 'success');
status.textContent = 'Ready to test';
testBtn.disabled = false;
testBtn.onclick = () => {
testBtn.disabled = true;
const serverUrl = document.getElementById('server-url').value.trim();
addLog('=== Starting WASM Test ===', 'info');
addLog(`🌐 Using server: ${serverUrl}`, 'info');
try {
test_socktop_connector(serverUrl || undefined);
setTimeout(() => {
testBtn.disabled = false;
}, 2000);
} catch (e) {
addLog('Test execution failed: ' + e.message, 'error');
testBtn.disabled = false;
}
};
} catch (e) {
addLog('Failed to initialize WASM: ' + e.message, 'error');
status.textContent = 'Failed to load WASM module';
console.error('WASM initialization error:', e);
}
}
run();
</script>
</body>
</html>

Binary image file not shown (214 KiB).

Some files were not shown because too many files have changed in this diff.