flutter_opencode_client/lib/data/provider/server/all.dart
GT610 c0f98e41c8 feat: Bring back completely re-optimized "Connection Stats" feature (#1090)
* feat(connection_stats): Restored the server connection statistics feature

* perf(store): Optimize data storage performance and implement caching mechanisms

- Implement caching mechanisms in SnippetStore and ServerStore to reduce redundant loading
- Refactor ConnectionStatsStore to use indexes and optimize query performance
- Adopt a more efficient approach when cleaning up expired records
- Add a maximum record limit to prevent data bloat
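The caching idea in the first two bullets can be sketched as a read-through cache with explicit invalidation. This is a minimal, self-contained illustration, not the project's actual store API; `CachedStore`, `loadFromDisk`, and the method names are all assumptions:

```dart
/// Minimal sketch of a read-through cache with explicit invalidation.
/// `loadFromDisk` stands in for the store's real (expensive) load; all
/// identifiers here are illustrative.
class CachedStore<T> {
  final List<T> Function() loadFromDisk;
  List<T>? _cache;

  CachedStore(this.loadFromDisk);

  /// Returns cached data when available, loading from disk only once.
  List<T> fetch() => _cache ??= loadFromDisk();

  /// Drops the cache so the next fetch reloads from disk.
  void invalidateCache() => _cache = null;
}

void main() {
  var loads = 0;
  final store = CachedStore<int>(() {
    loads++;
    return [1, 2, 3];
  });
  store.fetch();
  store.fetch(); // served from cache; no second load
  assert(loads == 1);
  store.invalidateCache();
  store.fetch(); // reloads after invalidation
  assert(loads == 2);
}
```

This mirrors how `ServersNotifier.reload` below pairs `Stores.server.invalidateCache()` with a fresh `_load()`.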

* perf(store): Optimize data storage performance and add a caching mechanism

Add a caching mechanism to PrivateKeyStore to reduce redundant loading

Make the cleanup and index rebuilding of ConnectionStatsStore asynchronous

Add database compression and size statistics
Display database size in the interface and optimize compression operations

* fix(cache): Fixed cache invalidation and connection statistics issues

- Added a cache invalidation call to the reload method
- Fixed an error in the calculation of connection statistics timestamps
- Optimized the cache index rebuild logic
- Added tooltips and click effects for connection statistics

* refactor(connection_stats): Convert file operations from synchronous to asynchronous and optimize record cleanup logic

Convert the database size retrieval method from synchronous to asynchronous to prevent UI blocking

Optimize server record cleanup logic by directly deleting redundant records instead of rebuilding indexes

* fix(connection_stats): Fixed an initialization issue when the index database is empty

During Stores initialization, the code now checks whether `connectionStats.indexDbKeys` is empty; if so, it calls `rebuildIndexAndCompact` to rebuild and compact the database. Additionally, the implementation of the `_pruneExcessRecords` method has been optimized to use tuples instead of temporary lists, thereby improving performance. A `mounted` check has been added at the UI layer to prevent state update issues during asynchronous operations.
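The "tuples instead of temporary lists" optimization mentioned above can be illustrated with Dart 3 records: pairing each key with its timestamp in a single record list avoids allocating parallel temporary lists while selecting the oldest entries. The names and the pruning policy here are assumptions, not the real `_pruneExcessRecords` implementation:

```dart
/// Illustrative pruning helper: returns the keys of the oldest records
/// beyond [maxRecords], using a list of named records (Dart 3 tuples)
/// instead of separate temporary key/timestamp lists.
List<String> keysToPrune(Map<String, int> timestampsByKey, int maxRecords) {
  if (timestampsByKey.length <= maxRecords) return const [];
  final entries = [
    for (final e in timestampsByKey.entries) (key: e.key, ts: e.value),
  ]..sort((a, b) => a.ts.compareTo(b.ts)); // oldest first
  final excess = entries.length - maxRecords;
  return [for (final e in entries.take(excess)) e.key];
}

void main() {
  final keys = keysToPrune({'a': 3, 'b': 1, 'c': 2}, 2);
  assert(keys.length == 1 && keys.first == 'b'); // only the oldest is pruned
}
```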

* fix(server): Improved error string matching logic to more accurately identify connection issues

Error strings are now uniformly converted to lowercase for comparison, and matching criteria have been expanded to cover a wider range of error scenarios, including timeouts, authentication failures, and network errors
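The matching approach above amounts to lowercasing the error once and checking substrings per category. A hedged sketch follows; the category names and keyword lists are illustrative assumptions, not the actual keywords used in `server.dart`:

```dart
/// Sketch of case-insensitive substring matching for error classification.
/// Categories and keywords are illustrative, not the project's real lists.
enum ConnIssue { timeout, auth, network, unknown }

ConnIssue classifyError(String raw) {
  final msg = raw.toLowerCase(); // normalize case once before matching
  if (msg.contains('timed out') || msg.contains('timeout')) {
    return ConnIssue.timeout;
  }
  if (msg.contains('auth') || msg.contains('permission denied')) {
    return ConnIssue.auth;
  }
  if (msg.contains('network') || msg.contains('connection refused')) {
    return ConnIssue.network;
  }
  return ConnIssue.unknown;
}

void main() {
  assert(classifyError('Connection Timed Out') == ConnIssue.timeout);
  assert(classifyError('Permission Denied (publickey)') == ConnIssue.auth);
  assert(classifyError('weird failure') == ConnIssue.unknown);
}
```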

* fix(PrivateKeyStore): Fixed an issue where the cache state was not updated when clearing the cache

When clearing the private key store, ensure that the internal cache state is updated simultaneously to maintain consistency

* refactor(store): Add close methods and clean up subscription logic

Add close methods to PrivateKeyStore, SnippetStore, and ServerStore to unsubscribe

Unify cache cleanup logic to prevent memory leaks

* fix(store): Add a cache update suppression mechanism to prevent circular updates

Add a _suppressWatch flag to multiple Store classes to suppress cache invalidation during internal operations

Add a _putWithoutInvalidatingCache method to prevent recursive watchers from being triggered during data updates

* refactor(store): Improve caching and state management using try-finally

In PrivateKeyStore, ServerStore, and SnippetStore:
1. Remove redundant close methods
2. Use try-finally to ensure the _suppressWatch state is reset correctly
3. Optimize cache invalidation logic
4. Standardize transaction handling for update operations
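The combination described in the two commits above — a suppress flag that silences the watcher during internal writes, reset in a try-finally so it survives exceptions — can be sketched like this. All identifiers are illustrative, not the actual store code:

```dart
/// Sketch of the suppress-watch pattern: internal writes set a flag so the
/// change watcher skips cache invalidation, and try-finally guarantees the
/// flag is reset even if the write throws.
class WatchedStore {
  bool _suppressWatch = false;
  int invalidations = 0;

  /// Simulates the box's change watcher firing.
  void onBoxChanged() {
    if (_suppressWatch) return; // internal write: skip cache invalidation
    invalidations++;
  }

  /// Internal write that must not re-trigger cache invalidation.
  void put(String key, String value) {
    _suppressWatch = true;
    try {
      // ... write to the underlying box here ...
      onBoxChanged(); // watcher fires, but is suppressed
    } finally {
      _suppressWatch = false; // always reset, even on error
    }
  }
}

void main() {
  final store = WatchedStore();
  store.put('a', '1');
  assert(store.invalidations == 0); // internal write did not invalidate
  store.onBoxChanged(); // an external change still does
  assert(store.invalidations == 1);
}
```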

* refactor(store): Optimize data storage operations and fix potential issues

- Ensure the safety and consistency of list operations in ConnectionStatsStore
- Replace direct calls to `box.put` with the `set` method in SnippetStore and ServerStore
- Extract decoding logic for PrivateKeyStore into a separate method
- Add logic to update server-hopping relationships

* fix: Fixed unawaited asynchronous operations and optimized storage operations

Fixed several issues where asynchronous operations were not being waited on to ensure data consistency

Added the _suppressWatch control to ServerStore and PrivateKeyStore

Optimized index management in ConnectionStatsStore to maintain record order

Added a new GitHub participant

* fix: Fixed potential state issues and memory leaks in asynchronous operations

Fixed potential state issues that could occur on the server edit page after a delete operation; added a mounted check

Changed the statistics clearing operation in connection_stats to run asynchronously

Optimized asynchronous operations in PrivateKeyStore and fixed potential memory leaks

* refactor(store): Convert asynchronous methods to synchronous ones to simplify the code

Fixed an issue where asynchronous operations were not handled correctly on the connection statistics page

* fix: Added mounted check and error handling for connection logs

Added a mounted check in _ConnectionStatsPageState to prevent the state from being updated after the component is unmounted
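The mounted check guards against an async load completing after the page is gone. Flutter's real `mounted` lives on `State`; this plain-Dart model only mimics the shape of the pattern, with illustrative names throughout:

```dart
import 'dart:async';

/// Plain-Dart model of the mounted-check pattern: an async load may finish
/// after the page is disposed, and the guard drops the stale result instead
/// of touching dead state. In Flutter the check is `if (!mounted) return;`
/// before calling setState.
class StatsPage {
  bool mounted = true;
  List<int> stats = [];

  Future<void> load(Future<List<int>> source) async {
    final data = await source;
    if (!mounted) return; // page disposed mid-await: discard the result
    stats = data;
  }

  void dispose() => mounted = false;
}

Future<void> main() async {
  final page = StatsPage();
  final pending = page.load(
    Future.delayed(const Duration(milliseconds: 10), () => [1, 2]),
  );
  page.dispose(); // user leaves the page before the load completes
  await pending;
  assert(page.stats.isEmpty); // the stale update was discarded
}
```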

Added a try-catch block for connection logs in ServerNotifier to catch and log potential storage exceptions
2026-03-27 17:14:07 +08:00

334 lines
9.5 KiB
Dart

import 'dart:async';
import 'package:fl_lib/fl_lib.dart';
import 'package:freezed_annotation/freezed_annotation.dart';
import 'package:riverpod_annotation/riverpod_annotation.dart';
import 'package:server_box/core/sync.dart';
import 'package:server_box/data/model/server/server.dart';
import 'package:server_box/data/model/server/server_private_info.dart';
import 'package:server_box/data/model/server/try_limiter.dart';
import 'package:server_box/data/provider/server/single.dart';
import 'package:server_box/data/res/store.dart';
import 'package:server_box/data/ssh/session_manager.dart';
part 'all.freezed.dart';
part 'all.g.dart';
@freezed
abstract class ServersState with _$ServersState {
const factory ServersState({
@Default({}) Map<String, Spi> servers,
@Default([]) List<String> serverOrder,
@Default(<String>{}) Set<String> tags,
@Default(<String>{}) Set<String> manualDisconnectedIds,
Timer? autoRefreshTimer,
}) = _ServersState;
}
@Riverpod(keepAlive: true)
class ServersNotifier extends _$ServersNotifier {
@override
ServersState build() {
return _load();
}
Future<void> reload() async {
Stores.server.invalidateCache();
final newState = _load();
if (newState == state) return;
state = newState;
await refresh();
}
ServersState _load() {
final spis = Stores.server.fetch();
final newServers = <String, Spi>{};
final newServerOrder = <String>[];
for (final spi in spis) {
newServers[spi.id] = spi;
}
final serverOrder_ = Stores.setting.serverOrder.fetch();
if (serverOrder_.isNotEmpty) {
spis.reorder(order: serverOrder_, finder: (n, id) => n.id == id);
newServerOrder.addAll(spis.map((e) => e.id));
} else {
newServerOrder.addAll(newServers.keys);
}
// Must use [equals] to compare [Order] here.
if (!newServerOrder.equals(serverOrder_)) {
Stores.setting.serverOrder.put(newServerOrder);
}
final newTags = _calculateTags(newServers);
return stateOrNull?.copyWith(servers: newServers, serverOrder: newServerOrder, tags: newTags) ??
ServersState(servers: newServers, serverOrder: newServerOrder, tags: newTags);
}
Set<String> _calculateTags(Map<String, Spi> servers) {
final tags = <String>{};
for (final spi in servers.values) {
final spiTags = spi.tags;
if (spiTags == null) continue;
for (final t in spiTags) {
tags.add(t);
}
}
return tags;
}
/// Get a [Spi] by [spi] or [id].
///
/// Priority: [spi] > [id]
Spi? pick({Spi? spi, String? id}) {
if (spi != null) {
return state.servers[spi.id];
}
if (id != null) {
return state.servers[id];
}
return null;
}
/// If [spi] is specified, only refresh that server.
/// If [onlyFailed] is true, only refresh servers whose connection failed.
Future<void> refresh({Spi? spi, bool onlyFailed = false}) async {
if (spi != null) {
final newManualDisconnected = Set<String>.from(state.manualDisconnectedIds)..remove(spi.id);
state = state.copyWith(manualDisconnectedIds: newManualDisconnected);
final serverNotifier = ref.read(serverProvider(spi.id).notifier);
await serverNotifier.refresh();
return;
}
final serversToRefresh = <MapEntry<String, Spi>>[];
final idsToResetLimiter = <String>[];
for (final entry in state.servers.entries) {
final serverId = entry.key;
final spi = entry.value;
if (state.manualDisconnectedIds.contains(serverId)) continue;
final serverState = ref.read(serverProvider(serverId));
if (onlyFailed) {
if (serverState.conn != ServerConn.failed) continue;
idsToResetLimiter.add(serverId);
}
if (serverState.conn == ServerConn.disconnected && !spi.autoConnect) continue;
serversToRefresh.add(entry);
}
for (final id in idsToResetLimiter) {
TryLimiter.reset(id);
}
for (final entry in serversToRefresh) {
final serverNotifier = ref.read(serverProvider(entry.key).notifier);
serverNotifier.refresh().ignore();
}
}
Future<void> startAutoRefresh() async {
var duration = Stores.setting.serverStatusUpdateInterval.fetch();
stopAutoRefresh();
if (duration == 0) return;
if (duration <= 1 || duration > 10) {
Loggers.app.warning('Invalid duration: $duration, use default 3');
duration = 3;
}
final timer = Timer.periodic(Duration(seconds: duration), (_) async {
await refresh();
});
state = state.copyWith(autoRefreshTimer: timer);
}
void stopAutoRefresh() {
final timer = state.autoRefreshTimer;
if (timer != null) {
timer.cancel();
}
state = state.copyWith(autoRefreshTimer: null);
}
bool get isAutoRefreshOn => state.autoRefreshTimer != null;
void setDisconnected() {
for (final serverId in state.servers.keys) {
final serverNotifier = ref.read(serverProvider(serverId).notifier);
serverNotifier.updateConnection(ServerConn.disconnected);
// Update SSH session status to disconnected
final sessionId = 'ssh_$serverId';
TermSessionManager.updateStatus(sessionId, TermSessionStatus.disconnected);
}
//TryLimiter.clear();
}
void closeServer({String? id}) {
if (id == null) {
for (final serverId in state.servers.keys) {
closeOneServer(serverId);
}
return;
}
closeOneServer(id);
}
void closeOneServer(String id) {
final spi = state.servers[id];
if (spi == null) {
Loggers.app.warning('Server with id $id not found');
return;
}
final serverNotifier = ref.read(serverProvider(id).notifier);
serverNotifier.closeConnection();
final newManualDisconnected = Set<String>.from(state.manualDisconnectedIds)..add(id);
state = state.copyWith(manualDisconnectedIds: newManualDisconnected);
// Remove SSH session when server is manually closed
final sessionId = 'ssh_$id';
TermSessionManager.remove(sessionId);
}
void addServer(Spi spi) {
final newServers = Map<String, Spi>.from(state.servers);
newServers[spi.id] = spi;
final newOrder = List<String>.from(state.serverOrder)..add(spi.id);
final newTags = _calculateTags(newServers);
state = state.copyWith(servers: newServers, serverOrder: newOrder, tags: newTags);
Stores.server.put(spi);
Stores.setting.serverOrder.put(newOrder);
refresh(spi: spi);
bakSync.sync(milliDelay: 1000);
}
Future<void> delServer(String id) async {
final newServers = Map<String, Spi>.from(state.servers);
newServers.remove(id);
final newOrder = List<String>.from(state.serverOrder)..remove(id);
final newTags = _calculateTags(newServers);
state = state.copyWith(servers: newServers, serverOrder: newOrder, tags: newTags);
Stores.setting.serverOrder.put(newOrder);
Stores.server.delete(id);
await Stores.connectionStats.clearServerStats(id);
// Remove SSH session when server is deleted
final sessionId = 'ssh_$id';
TermSessionManager.remove(sessionId);
bakSync.sync(milliDelay: 1000);
}
Future<void> deleteAll() async {
// Remove all SSH sessions before clearing servers
for (final id in state.servers.keys) {
final sessionId = 'ssh_$id';
TermSessionManager.remove(sessionId);
}
state = const ServersState();
Stores.setting.serverOrder.put([]);
Stores.server.clear();
await Stores.connectionStats.clearAll();
bakSync.sync(milliDelay: 1000);
}
void updateServerOrder(List<String> order) {
final seen = <String>{};
final newOrder = <String>[];
for (final id in order) {
if (!state.servers.containsKey(id)) {
continue;
}
if (!seen.add(id)) {
continue;
}
newOrder.add(id);
}
for (final id in state.servers.keys) {
if (seen.add(id)) {
newOrder.add(id);
}
}
if (_isSameOrder(newOrder, state.serverOrder)) {
return;
}
state = state.copyWith(serverOrder: newOrder);
Stores.setting.serverOrder.put(newOrder);
bakSync.sync(milliDelay: 1000);
}
bool _isSameOrder(List<String> a, List<String> b) {
if (identical(a, b)) {
return true;
}
if (a.length != b.length) {
return false;
}
for (var i = 0; i < a.length; i++) {
if (a[i] != b[i]) {
return false;
}
}
return true;
}
Future<void> updateServer(Spi old, Spi newSpi) async {
if (old != newSpi) {
Stores.server.update(old, newSpi);
final newServers = Map<String, Spi>.from(state.servers);
final newOrder = List<String>.from(state.serverOrder);
if (newSpi.id != old.id) {
newServers[newSpi.id] = newSpi;
newServers.remove(old.id);
newOrder.update(old.id, newSpi.id);
Stores.setting.serverOrder.put(newOrder);
// Update SSH session ID when server ID changes
final oldSessionId = 'ssh_${old.id}';
TermSessionManager.remove(oldSessionId);
// Session will be re-added when reconnecting if necessary
} else {
newServers[old.id] = newSpi;
// Update SPI in the corresponding IndividualServerNotifier
final serverNotifier = ref.read(serverProvider(old.id).notifier);
serverNotifier.updateSpi(newSpi);
}
final newTags = _calculateTags(newServers);
state = state.copyWith(servers: newServers, serverOrder: newOrder, tags: newTags);
// Only reconnect if necessary
if (newSpi.shouldReconnect(old)) {
// Use [newSpi.id] instead of [old.id] because [old.id] may be changed
TryLimiter.reset(newSpi.id);
refresh(spi: newSpi);
}
}
bakSync.sync(milliDelay: 1000);
}
}