Files
flutter_opencode_client/lib/data/store/connection_stats.dart
GT610 c0f98e41c8 feat: Bring back completely re-optimized "Connection Stats" feature (#1090)
* feat(connection_stats): Restored the server connection statistics feature

* perf(store): Optimize data storage performance and implement caching mechanisms

- Implement caching mechanisms in SnippetStore and ServerStore to reduce redundant loading
- Refactor ConnectionStatsStore to use indexes and optimize query performance
- Adopt a more efficient approach when cleaning up expired records
- Add a maximum record limit to prevent data bloat
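The caching described in the bullets above can be sketched roughly as a read-through cache that is invalidated on write. This is a minimal, hypothetical stand-in, not the app's actual `SnippetStore`/`ServerStore` API; `CachedStore` and its members are illustrative names.

```dart
// Hypothetical sketch of the read-through caching pattern. `_backing`
// stands in for the on-disk Hive box; `_cache` is rebuilt lazily.
class CachedStore<T> {
  final Map<String, T> _backing;
  Map<String, T>? _cache; // null means "invalidated, reload on next read"

  CachedStore(this._backing);

  // Reads hit the cache; the backing map is copied only when needed.
  Map<String, T> get all => _cache ??= Map.of(_backing);

  void set(String key, T value) {
    _backing[key] = value;
    _cache = null; // invalidate so the next read sees the new value
  }
}
```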

* perf(store): Optimize data storage performance and add a caching mechanism

Add a caching mechanism to PrivateKeyStore to reduce redundant loading

Make the cleanup and index rebuilding of ConnectionStatsStore asynchronous

Add database compression and size statistics
Display database size in the interface and optimize compression operations

* fix(cache): Fixed cache invalidation and connection statistics issues

- Added a cache invalidation call to the reload method
- Fixed an error in the calculation of connection statistics timestamps
- Optimized the cache index rebuild logic
- Added tooltips and click effects for connection statistics

* refactor(connection_stats): Convert file operations from synchronous to asynchronous and optimize record cleanup logic

Convert the database size retrieval method from synchronous to asynchronous to prevent UI blocking

Optimize server record cleanup logic by directly deleting redundant records instead of rebuilding indexes

* fix(connection_stats): Fixed an initialization issue when the index database is empty

During Stores initialization, the code now checks whether `connectionStats.indexDbKeys` is empty; if so, it calls `rebuildIndexAndCompact` to rebuild and compact the database. Additionally, the implementation of the `_pruneExcessRecords` method has been optimized to use tuples instead of temporary lists, thereby improving performance. A `mounted` check has been added at the UI layer to prevent state update issues during asynchronous operations.

* fix(server): Improved error string matching logic to more accurately identify connection issues

Error strings are now uniformly converted to lowercase for comparison, and matching criteria have been expanded to cover a wider range of error scenarios, including timeouts, authentication failures, and network errors
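The lowercase-then-match approach described above can be sketched as below. The keyword list is illustrative only; the real matcher's patterns are not reproduced here.

```dart
// Sketch of case-insensitive error classification: normalize once to
// lowercase, then test against a broad set of substrings. The patterns
// below are assumptions, not the app's actual list.
bool looksLikeConnectionIssue(String error) {
  final e = error.toLowerCase();
  const patterns = [
    'timeout',
    'timed out',
    'auth',
    'permission denied',
    'connection refused',
    'network',
  ];
  return patterns.any((p) => e.contains(p));
}
```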

* fix(PrivateKeyStore): Fixed an issue where the cache state was not updated when clearing the cache

When clearing the private key store, ensure that the internal cache state is updated simultaneously to maintain consistency

* refactor(store): Add close methods and clean up subscription logic

Add close methods to PrivateKeyStore, SnippetStore, and ServerStore to unsubscribe from their watchers

Unify cache cleanup logic to prevent memory leaks

* fix(store): Add a cache update suppression mechanism to prevent circular updates

Add a `_suppressWatch` flag to multiple Store classes to suppress cache invalidation during internal operations

Add a `_putWithoutInvalidatingCache` method to prevent recursive watcher triggers during data updates

* refactor(store): Improve caching and state management using try-finally

In PrivateKeyStore, ServerStore, and SnippetStore:
1. Remove redundant close methods
2. Use try-finally to ensure the _suppressWatch state is reset correctly
3. Optimize cache invalidation logic
4. Standardize transaction handling for update operations
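The `_suppressWatch` try-finally pattern from the bullets above can be sketched like this. Only the flag name and the try-finally shape mirror the commit; the store body is a simplified stand-in (in the app the callback is a Hive box watcher).

```dart
// Sketch of suppressing self-triggered watcher callbacks. An internal
// write sets `_suppressWatch` so the watcher skips cache invalidation;
// try-finally guarantees the flag is reset even if the write throws.
class WatchedStore {
  final _data = <String, String>{};
  bool _suppressWatch = false;
  int invalidations = 0; // counts cache-invalidation callbacks

  // In the app this would be a Hive box watcher; here it is called manually.
  void _onBoxChanged() {
    if (_suppressWatch) return; // our own write: keep the cache intact
    invalidations++;
  }

  void put(String key, String value) {
    _suppressWatch = true;
    try {
      _data[key] = value;
      _onBoxChanged(); // the watcher fires, but is suppressed
    } finally {
      _suppressWatch = false; // always reset, preventing a stuck flag
    }
  }

  void externalPut(String key, String value) {
    _data[key] = value;
    _onBoxChanged(); // external write: the cache must be invalidated
  }
}
```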

* refactor(store): Optimize data storage operations and fix potential issues

- Ensure the safety and consistency of list operations in ConnectionStatsStore
- Replace direct calls to `box.put` with the `set` method in SnippetStore and ServerStore
- Extract decoding logic for PrivateKeyStore into a separate method
- Add logic to update server-hopping relationships

* fix: Fixed unawaited asynchronous operations and optimized storage operations

Awaited several previously unawaited asynchronous operations to ensure data consistency

Added the `_suppressWatch` control to ServerStore and PrivateKeyStore

Optimized index management in ConnectionStatsStore to maintain record order

Added a new GitHub participant

* fix: Fixed potential state issues and memory leaks in asynchronous operations

Fixed potential state issues that could occur on the server edit page after a delete operation; added a mounted check

Changed the statistics clearing operation in connection_stats to run asynchronously

Optimized asynchronous operations in PrivateKeyStore and fixed potential memory leaks

* refactor(store): Convert asynchronous methods to synchronous ones to simplify the code

Fixed an issue where asynchronous operations were not handled correctly on the connection statistics page

* fix: Added mounted check and error handling for connection logs

Added a mounted check in _ConnectionStatsPageState to prevent the state from being updated after the component is unmounted
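The `mounted` guard can be sketched in plain Dart as follows; `mounted` and `setState` here are stand-ins mimicking Flutter's `State` members, not the app's actual page class.

```dart
// Plain-Dart stand-in for the Flutter pattern: skip the state update if
// the widget was disposed while an async operation was in flight.
class StatsPageState {
  bool mounted = true; // flips to false when the widget is disposed
  int? stats;

  void setState(void Function() fn) => fn();

  Future<void> load(Future<int> Function() fetch) async {
    final value = await fetch();
    if (!mounted) return; // the page went away mid-await: do nothing
    setState(() => stats = value);
  }
}
```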

Added a try-catch block for connection logs in ServerNotifier to catch and log potential storage exceptions
2026-03-27 17:14:07 +08:00

277 lines
7.8 KiB
Dart

import 'dart:io';

import 'package:fl_lib/fl_lib.dart';
import 'package:hive_ce/hive.dart';
import 'package:server_box/data/model/server/connection_stat.dart';

/// Persists per-server connection attempts and keeps a secondary index box
/// (`idx_<serverId>` -> list of record keys) for fast per-server queries.
class ConnectionStatsStore extends HiveStore {
  ConnectionStatsStore._() : super('connection_stats');

  static final instance = ConnectionStatsStore._();

  static const _indexBoxName = 'conn_stats_index';
  static const _maxRecordsPerServer = 100;

  late final Box<dynamic> _indexBox;

  @override
  Future<void> init() async {
    await super.init();
    // Open the index box in the same directory as the main box file.
    _indexBox = await Hive.openBox(
      _indexBoxName,
      path: box.path?.substring(0, box.path!.lastIndexOf(Pfs.seperator)),
    );
  }

  Future<void> rebuildIndexAndCompact() async {
    await _cleanAllOldAndRebuildIndex();
    await _compactIfNeeded();
  }

  Future<void> _rebuildIndexCore() async {
    // Drop records older than 30 days and group the rest by server id.
    final cutoffTime = DateTime.now().subtract(const Duration(days: 30));
    final serverIdToKeys = <String, List<String>>{};
    for (final key in keys().toList()) {
      final stat = get<ConnectionStat>(key);
      if (stat == null) continue;
      if (stat.timestamp.isBefore(cutoffTime)) {
        remove(key);
        continue;
      }
      final serverId = stat.serverId;
      serverIdToKeys.putIfAbsent(serverId, () => []).add(key);
    }

    // Delete all existing index entries before rebuilding from scratch.
    final idxKeysToDelete =
        _indexBox.keys.where((k) => k.toString().startsWith('idx_')).toList();
    for (final k in idxKeysToDelete) {
      await _indexBox.delete(k);
    }

    for (final entry in serverIdToKeys.entries) {
      final keys = entry.value;
      if (keys.length > _maxRecordsPerServer) {
        // Keep only the newest records; the index stores keys oldest-first.
        final keyStatPairs = <(String, ConnectionStat)>[];
        for (final key in keys) {
          final stat = get<ConnectionStat>(key);
          if (stat != null) keyStatPairs.add((key, stat));
        }
        keyStatPairs.sort((a, b) => b.$2.timestamp.compareTo(a.$2.timestamp));
        final toKeep = keyStatPairs
            .take(_maxRecordsPerServer)
            .map((p) => p.$1)
            .toList()
            .reversed
            .toList();
        final toRemove = keyStatPairs.skip(_maxRecordsPerServer);
        for (final pair in toRemove) {
          remove(pair.$1);
        }
        await _indexBox.put('idx_${entry.key}', toKeep);
      } else {
        await _indexBox.put('idx_${entry.key}', keys);
      }
    }
  }

  Future<void> _cleanAllOldAndRebuildIndex() async {
    await _rebuildIndexCore();
  }

  Future<void> _compactIfNeeded() async {
    try {
      await box.compact();
      await _indexBox.compact();
    } catch (e, st) {
      Loggers.app.warning('Auto compact failed during init', e, st);
    }
  }

  Future<void> _updateIndex(String serverId, String recordKey) async {
    final indexKey = 'idx_$serverId';
    final keys =
        (_indexBox.get(indexKey) as List?)?.cast<String>().toList() ?? [];
    if (!keys.contains(recordKey)) {
      keys.add(recordKey);
      if (keys.length > _maxRecordsPerServer) {
        await _pruneExcessRecords(serverId, keys);
      }
      await _indexBox.put(indexKey, keys);
    }
  }

  Future<void> _pruneExcessRecords(String serverId, List<String> keys) async {
    if (keys.length <= _maxRecordsPerServer) return;
    final keyStatPairs = <(String, ConnectionStat)>[];
    for (final key in keys) {
      final stat = get<ConnectionStat>(key);
      if (stat != null) {
        keyStatPairs.add((key, stat));
      }
    }
    // Sort newest first; everything past the cap is deleted.
    keyStatPairs.sort((a, b) => b.$2.timestamp.compareTo(a.$2.timestamp));
    final toRemove = keyStatPairs.skip(_maxRecordsPerServer);
    for (final pair in toRemove) {
      remove(pair.$1);
      keys.remove(pair.$1);
    }
  }

  Future<void> recordConnection(ConnectionStat stat) async {
    final key = '${stat.serverId}_${stat.timestamp.millisecondsSinceEpoch}';
    set(key, stat);
    await _updateIndex(stat.serverId, key);
  }

  ServerConnectionStats getServerStats(String serverId, String serverName) {
    final allStats = getConnectionHistory(serverId);
    if (allStats.isEmpty) {
      return ServerConnectionStats(
        serverId: serverId,
        serverName: serverName,
        totalAttempts: 0,
        successCount: 0,
        failureCount: 0,
        recentConnections: [],
        successRate: 0.0,
      );
    }

    final totalAttempts = allStats.length;
    final successCount = allStats.where((s) => s.result.isSuccess).length;
    final failureCount = totalAttempts - successCount;
    final successRate =
        totalAttempts > 0 ? (successCount / totalAttempts) : 0.0;

    final successTimes = allStats
        .where((s) => s.result.isSuccess)
        .map((s) => s.timestamp)
        .toList();
    final failureTimes = allStats
        .where((s) => !s.result.isSuccess)
        .map((s) => s.timestamp)
        .toList();

    DateTime? lastSuccessTime;
    DateTime? lastFailureTime;
    if (successTimes.isNotEmpty) {
      successTimes.sort((a, b) => b.compareTo(a));
      lastSuccessTime = successTimes.first;
    }
    if (failureTimes.isNotEmpty) {
      failureTimes.sort((a, b) => b.compareTo(a));
      lastFailureTime = failureTimes.first;
    }

    // History is already sorted newest-first.
    final recentConnections = allStats.take(20).toList();

    return ServerConnectionStats(
      serverId: serverId,
      serverName: serverName,
      totalAttempts: totalAttempts,
      successCount: successCount,
      failureCount: failureCount,
      lastSuccessTime: lastSuccessTime,
      lastFailureTime: lastFailureTime,
      recentConnections: recentConnections,
      successRate: successRate,
    );
  }

  List<ConnectionStat> getConnectionHistory(String serverId) {
    final indexKey = 'idx_$serverId';
    final keys = (_indexBox.get(indexKey) as List?)?.cast<String>() ?? [];
    final stats = <ConnectionStat>[];
    for (final key in keys) {
      final stat = get<ConnectionStat>(key);
      if (stat != null) {
        stats.add(stat);
      }
    }
    stats.sort((a, b) => b.timestamp.compareTo(a.timestamp));
    return stats;
  }

  List<ServerConnectionStats> getAllServerStats() {
    final indexKeys = _indexBox.keys
        .where((k) => k is String && k.startsWith('idx_'))
        .cast<String>()
        .toList();
    final allStats = <ServerConnectionStats>[];
    for (final indexKey in indexKeys) {
      final serverId = indexKey.substring(4); // strip the 'idx_' prefix
      final keys = (_indexBox.get(indexKey) as List?)?.cast<String>() ?? [];
      if (keys.isEmpty) continue;
      // The index is oldest-first, so the newest record (which carries the
      // most recent display name) is found by iterating in reverse.
      String? serverName;
      for (final key in keys.reversed) {
        final stat = get<ConnectionStat>(key);
        if (stat != null) {
          serverName = stat.serverName;
          break;
        }
      }
      if (serverName == null) continue;
      final stats = getServerStats(serverId, serverName);
      allStats.add(stats);
    }
    return allStats;
  }

  Future<void> clearAll() async {
    await box.clear();
    await _indexBox.clear();
  }

  Future<void> clearServerStats(String serverId) async {
    final indexKey = 'idx_$serverId';
    final keys = (_indexBox.get(indexKey) as List?)?.cast<String>() ?? [];
    for (final key in keys) {
      remove(key);
    }
    await _indexBox.delete(indexKey);
  }

  Future<void> compact() async {
    Loggers.app.info('Start compacting connection_stats database...');
    try {
      await box.compact();
      await _indexBox.compact();
      Loggers.app.info('Finished compacting connection_stats database');
    } catch (e, st) {
      Loggers.app.warning('Failed compacting connection_stats database', e, st);
      rethrow;
    }
  }

  String? get dbPath => box.path;

  String? get indexDbPath => _indexBox.path;

  Iterable<dynamic> get indexDbKeys =>
      _indexBox.keys.where((k) => k.toString().startsWith('idx_'));

  Future<int> dbSizeAsync() async {
    final path = dbPath;
    if (path == null) return 0;
    final file = File(path);
    return await file.exists() ? await file.length() : 0;
  }

  Future<int> indexDbSizeAsync() async {
    final path = indexDbPath;
    if (path == null) return 0;
    final file = File(path);
    return await file.exists() ? await file.length() : 0;
  }
}