feat: Bring back the completely re-optimized "Connection Stats" feature (#1090)

* feat(connection_stats): Restored the server connection statistics feature

* perf(store): Optimize data storage performance and implement caching mechanisms

- Implement caching mechanisms in SnippetStore and ServerStore to reduce redundant loading
- Refactor ConnectionStatsStore to use indexes and optimize query performance
- Adopt a more efficient approach when cleaning up expired records
- Add a maximum record limit to prevent data bloat
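The caching pattern described above can be sketched in a few lines. This is a minimal, language-agnostic illustration in Python (the project itself is Dart); the `CachedStore` name and `loader` callback are hypothetical stand-ins for a store's expensive decode-everything-from-disk load.

```python
class CachedStore:
    """Lazy in-memory cache over an expensive backing load."""

    def __init__(self, loader):
        self._loader = loader   # expensive load, e.g. decoding all records from disk
        self._cache = None
        self.load_count = 0     # counts how often the backing load actually ran

    def fetch(self):
        # Load once, then serve from memory until invalidated.
        if self._cache is None:
            self._cache = self._loader()
            self.load_count += 1
        return list(self._cache)  # defensive copy: callers can't mutate the cache

    def invalidate_cache(self):
        self._cache = None
```

Repeated `fetch()` calls hit the cache; `invalidate_cache()` forces the next call to reload, which is the hook the `reload` methods call after external changes.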

* perf(store): Optimize data storage performance and add a caching mechanism

- Add a caching mechanism to PrivateKeyStore to reduce redundant loading
- Make the cleanup and index rebuilding of ConnectionStatsStore asynchronous
- Add database compression and size statistics
- Display the database size in the interface and optimize compression operations

* fix(cache): Fixed cache invalidation and connection statistics issues

- Added a cache invalidation call to the reload method
- Fixed an error in the calculation of connection statistics timestamps
- Optimized the cache index rebuild logic
- Added tooltips and tap effects for connection statistics

* refactor(connection_stats): Convert file operations from synchronous to asynchronous and optimize record cleanup logic

Convert the database size retrieval method from synchronous to asynchronous to prevent UI blocking

Optimize server record cleanup logic by directly deleting redundant records instead of rebuilding indexes
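The non-blocking size retrieval can be illustrated with a small Python sketch (the actual code is Dart, where `File.length()` is already async). The function name `db_size_async` mirrors the getter this PR adds; pushing the blocking syscall to a worker thread is the generic equivalent of the UI-thread-safe approach described above.

```python
import asyncio
import os

async def db_size_async(path):
    """Return a database file's size without blocking the event loop.

    os.path.getsize is a blocking syscall, so it runs in a worker thread;
    a missing file reports 0 rather than raising.
    """
    def _size():
        return os.path.getsize(path) if os.path.exists(path) else 0
    return await asyncio.to_thread(_size)
```

Usage: `size = asyncio.run(db_size_async("connection_stats.hive"))`. The UI can await this and render the size once available instead of stalling on the file system.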

* fix(connection_stats): Fixed an initialization issue when the index database is empty

During Stores initialization, the code now checks whether `connectionStats.indexDbKeys` is empty; if so, it calls `rebuildIndexAndCompact` to rebuild and compact the database. Additionally, the implementation of the `_pruneExcessRecords` method has been optimized to use tuples instead of temporary lists, thereby improving performance. A `mounted` check has been added at the UI layer to prevent state update issues during asynchronous operations.
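The tuple-based pruning in `_pruneExcessRecords` can be sketched as follows. This is a simplified Python rendering of the Dart logic (Dart records `(key, stat)` become Python tuples); `records` here is a hypothetical `key -> timestamp` mapping standing in for the stored `ConnectionStat` objects.

```python
MAX_RECORDS_PER_SERVER = 100  # cap from the "maximum record limit" above

def prune_excess(records, max_records=MAX_RECORDS_PER_SERVER):
    """Return the record keys to delete, keeping only the newest entries.

    Each key is paired with its timestamp as a tuple, sorted newest-first,
    and everything past the cap is dropped -- no separate temporary lists
    for keys and timestamps.
    """
    if len(records) <= max_records:
        return []
    pairs = sorted(records.items(), key=lambda kv: kv[1], reverse=True)
    return [key for key, _ in pairs[max_records:]]
```

The caller deletes the returned keys from both the data box and the per-server index, so the index stays the single source of truth for record order.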

* fix(server): Improved error string matching logic to more accurately identify connection issues

Error strings are now uniformly converted to lowercase for comparison, and matching criteria have been expanded to cover a wider range of error scenarios, including timeouts, authentication failures, and network errors
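The classification described above (and visible in the diff below) boils down to a lowercase substring match per category. A minimal Python sketch of the same decision order, with the category names taken from the `ConnectionResult` enum in the diff:

```python
def classify_error(message):
    """Map a raw connection error string to a coarse failure category."""
    err = message.lower()  # uniform lowercase comparison
    if "timed out" in err or "timeout" in err:
        return "timeout"
    if any(s in err for s in ("auth", "authentication",
                              "permission denied", "access denied")):
        return "authFailed"
    if any(s in err for s in ("connection refused", "no route to host",
                              "network", "socket")):
        return "networkError"
    return "unknownError"
```

The order matters: timeout is checked before the broader "network"/"socket" patterns so a socket timeout isn't misfiled as a generic network error.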

* fix(PrivateKeyStore): Fixed an issue where the cache state was not updated when clearing the cache

When clearing the private key store, ensure that the internal cache state is updated simultaneously to maintain consistency

* refactor(store): Add close methods and clean up subscription logic

Add close methods to PrivateKeyStore, SnippetStore, and ServerStore to unsubscribe

Unify cache cleanup logic to prevent memory leaks

* fix(store): Add a cache update suppression mechanism to prevent circular updates

Add a `_suppressWatch` flag to multiple Store classes to suppress cache invalidation during internal operations

Add a `_putWithoutInvalidatingCache` method so data updates do not recursively trigger watchers

* refactor(store): Improve caching and state management using try-finally

In PrivateKeyStore, ServerStore, and SnippetStore:
1. Remove redundant close methods
2. Use try-finally to ensure the _suppressWatch state is reset correctly
3. Optimize cache invalidation logic
4. Standardize transaction handling for update operations
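The suppression-plus-try-finally pattern from the two commits above can be sketched together. This is a Python illustration of the Dart pattern, assuming a simplified store where `_write` stands in for `box.put` and `_on_watch` for the `box.watch()` listener; the counter is only there to make the behavior observable.

```python
class WatchedStore:
    """Cache kept coherent by a change watcher; the watcher is suppressed
    around internal writes so a write never re-triggers invalidation."""

    def __init__(self):
        self._data = {}
        self._cache = None
        self._suppress_watch = False
        self.watch_invalidations = 0  # invalidations performed by the watcher

    def _write(self, key, value):
        self._data[key] = value
        self._on_watch()  # the backing box notifies watchers on every write

    def _on_watch(self):
        if not self._suppress_watch:
            self._cache = None
            self.watch_invalidations += 1

    def fetch(self):
        if self._cache is None:
            self._cache = dict(self._data)
        return self._cache

    def put(self, key, value):
        # try/finally guarantees the flag resets even if the write throws.
        self._suppress_watch = True
        try:
            self._write(key, value)
            self._cache = None  # invalidate exactly once, explicitly
        finally:
            self._suppress_watch = False

    def put_without_invalidating_cache(self, key, value):
        # Used while (re)building the cache itself: persist without dropping it.
        self._suppress_watch = True
        try:
            self._write(key, value)
        finally:
            self._suppress_watch = False
```

External writes (anything not going through `put`) still invalidate via the watcher, which is exactly the circular-update boundary the flag draws.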

* refactor(store): Optimize data storage operations and fix potential issues

- Ensure the safety and consistency of list operations in ConnectionStatsStore
- Replace direct calls to `box.put` with the `set` method in SnippetStore and ServerStore
- Extract decoding logic for PrivateKeyStore into a separate method
- Add logic to update jump-server relationships

* fix: Fixed issues where asynchronous operations were not awaited, and optimized storage operations

Fixed several places where asynchronous operations were not awaited, ensuring data consistency

Added the _suppressWatch control to ServerStore and PrivateKeyStore

Optimized index management in ConnectionStatsStore to maintain record order

Added a new GitHub contributor to the credits list

* fix: Fixed potential state issues and memory leaks in asynchronous operations

Fixed potential state issues that could occur on the server edit page after a delete operation; added a mounted check

Changed the statistics clearing operation in connection_stats to run asynchronously

Optimized asynchronous operations in PrivateKeyStore and fixed potential memory leaks

* refactor(store): Convert asynchronous methods to synchronous ones to simplify the code

Fixed an issue where asynchronous operations were not handled correctly on the connection statistics page

* fix: Added mounted check and error handling for connection logs

Added a mounted check in _ConnectionStatsPageState to prevent the state from being updated after the component is unmounted

Added a try-catch block for connection logs in ServerNotifier to catch and log potential storage exceptions
GT610
2026-03-27 17:14:07 +08:00
committed by GitHub
parent fa4ac00ced
commit c0f98e41c8
19 changed files with 880 additions and 174 deletions

View File

@@ -20,6 +20,7 @@ class PrivateKeyNotifier extends _$PrivateKeyNotifier {
}
void reload() {
Stores.key.invalidateCache();
final newState = _load();
if (newState == state) return;
state = newState;

View File

@@ -33,6 +33,7 @@ class ServersNotifier extends _$ServersNotifier {
}
Future<void> reload() async {
Stores.server.invalidateCache();
final newState = _load();
if (newState == state) return;
state = newState;
@@ -213,7 +214,7 @@ class ServersNotifier extends _$ServersNotifier {
bakSync.sync(milliDelay: 1000);
}
void delServer(String id) {
Future<void> delServer(String id) async {
final newServers = Map<String, Spi>.from(state.servers);
newServers.remove(id);
@@ -225,6 +226,8 @@ class ServersNotifier extends _$ServersNotifier {
Stores.setting.serverOrder.put(newOrder);
Stores.server.delete(id);
await Stores.connectionStats.clearServerStats(id);
// Remove SSH session when server is deleted
final sessionId = 'ssh_$id';
TermSessionManager.remove(sessionId);
@@ -232,7 +235,7 @@ class ServersNotifier extends _$ServersNotifier {
bakSync.sync(milliDelay: 1000);
}
void deleteAll() {
Future<void> deleteAll() async {
// Remove all SSH sessions before clearing servers
for (final id in state.servers.keys) {
final sessionId = 'ssh_$id';
@@ -243,6 +246,7 @@ class ServersNotifier extends _$ServersNotifier {
Stores.setting.serverOrder.put([]);
Stores.server.clear();
await Stores.connectionStats.clearAll();
bakSync.sync(milliDelay: 1000);
}

View File

@@ -41,7 +41,7 @@ final class ServersNotifierProvider
}
}
String _$serversNotifierHash() => r'277d1b219235f14bcc1b82a1e16260c2f28decdb';
String _$serversNotifierHash() => r'c90c2d8ce73a63f926bcf9679a84ae150c9d4808';
abstract class _$ServersNotifier extends $Notifier<ServersState> {
ServersState build();

View File

@@ -13,6 +13,7 @@ import 'package:server_box/data/helper/system_detector.dart';
import 'package:server_box/data/model/app/error.dart';
import 'package:server_box/data/model/app/scripts/script_consts.dart';
import 'package:server_box/data/model/app/scripts/shell_func.dart';
import 'package:server_box/data/model/server/connection_stat.dart';
import 'package:server_box/data/model/server/server.dart';
import 'package:server_box/data/model/server/server_private_info.dart';
import 'package:server_box/data/model/server/server_status_update_req.dart';
@@ -123,8 +124,8 @@ class ServerNotifier extends _$ServerNotifier {
}
}
final time1 = DateTime.now();
try {
final time1 = DateTime.now();
final client = await genClient(
spi,
timeout: Duration(seconds: Stores.setting.timeout.fetch()),
@@ -140,6 +141,18 @@ class ServerNotifier extends _$ServerNotifier {
Loggers.app.info('Jump to ${spi.name} in $spentTime ms.');
}
try {
await Stores.connectionStats.recordConnection(ConnectionStat(
serverId: spi.id,
serverName: spi.name,
timestamp: time1,
result: ConnectionResult.success,
durationMs: spentTime,
));
} catch (e) {
Loggers.app.warning('Failed to record connection success', e);
}
final sessionId = 'ssh_${spi.id}';
TermSessionManager.add(
id: sessionId,
@@ -152,6 +165,33 @@ class ServerNotifier extends _$ServerNotifier {
} catch (e) {
TryLimiter.inc(sid);
final durationMs = DateTime.now().difference(time1).inMilliseconds;
ConnectionResult failureResult;
final errStr = e.toString().toLowerCase();
if (errStr.contains('timed out') || errStr.contains('timeout')) {
failureResult = ConnectionResult.timeout;
} else if (errStr.contains('auth') || errStr.contains('authentication') || errStr.contains('permission denied') || errStr.contains('access denied')) {
failureResult = ConnectionResult.authFailed;
} else if (errStr.contains('connection refused') || errStr.contains('no route to host') || errStr.contains('network') || errStr.contains('socket')) {
failureResult = ConnectionResult.networkError;
} else {
failureResult = ConnectionResult.unknownError;
}
try {
await Stores.connectionStats.recordConnection(ConnectionStat(
serverId: spi.id,
serverName: spi.name,
timestamp: time1,
result: failureResult,
errorMessage: e.toString(),
durationMs: durationMs,
));
} catch (recErr) {
Loggers.app.warning('Failed to record connection failure', recErr);
}
final newStatus = state.status..err = SSHErr(type: SSHErrType.connect, message: e.toString());
updateStatus(newStatus);
updateConnection(ServerConn.failed);

View File

@@ -58,7 +58,7 @@ final class ServerNotifierProvider
}
}
String _$serverNotifierHash() => r'1bda6d0a9688ab843cf30803dafe3400379dc5c3';
String _$serverNotifierHash() => r'04b1beef4d96242fd10d5b523c6f5f17eb774bae';
final class ServerNotifierFamily extends $Family
with

View File

@@ -24,6 +24,7 @@ class SnippetNotifier extends _$SnippetNotifier {
}
void reload() {
Stores.snippet.invalidateCache();
final newState = _load();
if (newState == state) return;
state = newState;

View File

@@ -153,7 +153,8 @@ abstract final class GithubIds {
'aliferne',
'canronglan',
'nickgirga',
'xxnuo'
'xxnuo',
'sunnysu0608',
};
}

View File

@@ -45,6 +45,10 @@ abstract final class Stores {
getIt.registerLazySingleton<PortForwardStore>(() => PortForwardStore.instance);
await Future.wait(_allBackup.map((store) => store.init()));
if (connectionStats.indexDbKeys.isEmpty) {
await connectionStats.rebuildIndexAndCompact();
}
}
static int get lastModTime {

View File

@@ -1,48 +1,131 @@
import 'dart:io';
import 'package:fl_lib/fl_lib.dart';
import 'package:hive_ce/hive.dart';
import 'package:server_box/data/model/server/connection_stat.dart';
class ConnectionStatsStore extends HiveStore {
ConnectionStatsStore._() : super('connection_stats');
static final instance = ConnectionStatsStore._();
// Record a connection attempt
void recordConnection(ConnectionStat stat) {
final key = '${stat.serverId}_${ShortId.generate()}';
set(key, stat);
_cleanOldRecords(stat.serverId);
static const _indexBoxName = 'conn_stats_index';
static const _maxRecordsPerServer = 100;
late final Box<dynamic> _indexBox;
@override
Future<void> init() async {
await super.init();
_indexBox = await Hive.openBox(
_indexBoxName,
path: box.path?.substring(0, box.path!.lastIndexOf(Pfs.seperator)),
);
}
// Clean records older than 30 days for a specific server
void _cleanOldRecords(String serverId) {
Future<void> rebuildIndexAndCompact() async {
await _cleanAllOldAndRebuildIndex();
await _compactIfNeeded();
}
Future<void> _rebuildIndexCore() async {
final cutoffTime = DateTime.now().subtract(const Duration(days: 30));
final allKeys = keys().toList();
final keysToDelete = <String>[];
for (final key in allKeys) {
if (key.startsWith(serverId)) {
final parts = key.split('_');
if (parts.length >= 2) {
final timestamp = int.tryParse(parts.last);
if (timestamp != null) {
final recordTime = DateTime.fromMillisecondsSinceEpoch(timestamp);
if (recordTime.isBefore(cutoffTime)) {
keysToDelete.add(key);
}
}
final serverIdToKeys = <String, List<String>>{};
for (final key in keys().toList()) {
final stat = get<ConnectionStat>(key);
if (stat == null) continue;
if (stat.timestamp.isBefore(cutoffTime)) {
remove(key);
continue;
}
final serverId = stat.serverId;
serverIdToKeys.putIfAbsent(serverId, () => []).add(key);
}
final idxKeysToDelete = _indexBox.keys.where((k) => k.toString().startsWith('idx_')).toList();
for (final k in idxKeysToDelete) {
await _indexBox.delete(k);
}
for (final entry in serverIdToKeys.entries) {
final keys = entry.value;
if (keys.length > _maxRecordsPerServer) {
final keyStatPairs = <(String, ConnectionStat)>[];
for (final key in keys) {
final stat = get<ConnectionStat>(key);
if (stat != null) keyStatPairs.add((key, stat));
}
keyStatPairs.sort((a, b) => b.$2.timestamp.compareTo(a.$2.timestamp));
final toKeep = keyStatPairs.take(_maxRecordsPerServer).map((p) => p.$1).toList().reversed.toList();
final toRemove = keyStatPairs.skip(_maxRecordsPerServer);
for (final pair in toRemove) {
remove(pair.$1);
}
await _indexBox.put('idx_${entry.key}', toKeep);
} else {
await _indexBox.put('idx_${entry.key}', keys);
}
}
for (final key in keysToDelete) {
remove(key);
}
Future<void> _cleanAllOldAndRebuildIndex() async {
await _rebuildIndexCore();
}
Future<void> _compactIfNeeded() async {
try {
await box.compact();
await _indexBox.compact();
} catch (e, st) {
Loggers.app.warning('Auto compact failed during init', e, st);
}
}
// Get connection stats for a specific server
Future<void> _updateIndex(String serverId, String recordKey) async {
final indexKey = 'idx_$serverId';
final keys = (_indexBox.get(indexKey) as List?)?.cast<String>().toList() ?? [];
if (!keys.contains(recordKey)) {
keys.add(recordKey);
if (keys.length > _maxRecordsPerServer) {
await _pruneExcessRecords(serverId, keys);
}
await _indexBox.put(indexKey, keys);
}
}
Future<void> _pruneExcessRecords(String serverId, List<String> keys) async {
if (keys.length <= _maxRecordsPerServer) return;
final keyStatPairs = <(String, ConnectionStat)>[];
for (final key in keys) {
final stat = get<ConnectionStat>(key);
if (stat != null) {
keyStatPairs.add((key, stat));
}
}
keyStatPairs.sort((a, b) => b.$2.timestamp.compareTo(a.$2.timestamp));
final toRemove = keyStatPairs.skip(_maxRecordsPerServer);
for (final pair in toRemove) {
remove(pair.$1);
keys.remove(pair.$1);
}
}
Future<void> recordConnection(ConnectionStat stat) async {
final key = '${stat.serverId}_${stat.timestamp.millisecondsSinceEpoch}';
set(key, stat);
await _updateIndex(stat.serverId, key);
}
ServerConnectionStats getServerStats(String serverId, String serverName) {
final allStats = getConnectionHistory(serverId);
if (allStats.isEmpty) {
return ServerConnectionStats(
serverId: serverId,
@@ -54,12 +137,12 @@ class ConnectionStatsStore extends HiveStore {
successRate: 0.0,
);
}
final totalAttempts = allStats.length;
final successCount = allStats.where((s) => s.result.isSuccess).length;
final failureCount = totalAttempts - successCount;
final successRate = totalAttempts > 0 ? (successCount / totalAttempts) : 0.0;
final successTimes = allStats
.where((s) => s.result.isSuccess)
.map((s) => s.timestamp)
@@ -68,23 +151,22 @@ class ConnectionStatsStore extends HiveStore {
.where((s) => !s.result.isSuccess)
.map((s) => s.timestamp)
.toList();
DateTime? lastSuccessTime;
DateTime? lastFailureTime;
if (successTimes.isNotEmpty) {
successTimes.sort((a, b) => b.compareTo(a));
lastSuccessTime = successTimes.first;
}
if (failureTimes.isNotEmpty) {
failureTimes.sort((a, b) => b.compareTo(a));
lastFailureTime = failureTimes.first;
}
// Get recent connections (last 20)
final recentConnections = allStats.take(20).toList();
return ServerConnectionStats(
serverId: serverId,
serverName: serverName,
@@ -97,108 +179,98 @@ class ConnectionStatsStore extends HiveStore {
successRate: successRate,
);
}
// Get connection history for a specific server
List<ConnectionStat> getConnectionHistory(String serverId) {
final allKeys = keys().where((key) => key.startsWith(serverId)).toList();
final indexKey = 'idx_$serverId';
final keys = (_indexBox.get(indexKey) as List?)?.cast<String>() ?? [];
final stats = <ConnectionStat>[];
for (final key in allKeys) {
final stat = get<ConnectionStat>(
key,
fromObj: (val) {
if (val is ConnectionStat) return val;
if (val is Map<dynamic, dynamic>) {
final map = val.toStrDynMap;
if (map == null) return null;
try {
return ConnectionStat.fromJson(map as Map<String, dynamic>);
} catch (e) {
dprint('Parsing ConnectionStat from JSON', e);
}
}
return null;
},
);
for (final key in keys) {
final stat = get<ConnectionStat>(key);
if (stat != null) {
stats.add(stat);
}
}
// Sort by timestamp, newest first
stats.sort((a, b) => b.timestamp.compareTo(a.timestamp));
return stats;
}
// Get all servers' stats
List<ServerConnectionStats> getAllServerStats() {
final serverIds = <String>{};
final serverNames = <String, String>{};
// Get all unique server IDs
for (final key in keys()) {
final parts = key.split('_');
if (parts.length >= 2) {
final serverId = parts[0];
serverIds.add(serverId);
// Try to get server name from the stored stat
final stat = get<ConnectionStat>(
key,
fromObj: (val) {
if (val is ConnectionStat) return val;
if (val is Map<dynamic, dynamic>) {
final map = val.toStrDynMap;
if (map == null) return null;
try {
return ConnectionStat.fromJson(map as Map<String, dynamic>);
} catch (e) {
dprint('Parsing ConnectionStat from JSON', e);
}
}
return null;
},
);
final indexKeys = _indexBox.keys
.where((k) => k is String && k.startsWith('idx_'))
.cast<String>()
.toList();
final allStats = <ServerConnectionStats>[];
for (final indexKey in indexKeys) {
final serverId = indexKey.substring(4);
final keys = (_indexBox.get(indexKey) as List?)?.cast<String>() ?? [];
if (keys.isEmpty) continue;
String? serverName;
for (final key in keys.reversed) {
final stat = get<ConnectionStat>(key);
if (stat != null) {
serverNames[serverId] = stat.serverName;
serverName = stat.serverName;
break;
}
}
}
final allStats = <ServerConnectionStats>[];
for (final serverId in serverIds) {
final serverName = serverNames[serverId] ?? serverId;
if (serverName == null) continue;
final stats = getServerStats(serverId, serverName);
allStats.add(stats);
}
return allStats;
}
// Clear all connection stats
void clearAll() {
box.clear();
Future<void> clearAll() async {
await box.clear();
await _indexBox.clear();
}
// Clear stats for a specific server
void clearServerStats(String serverId) {
final keysToDelete = keys().where((key) {
if (key == serverId) return true;
return key.startsWith('${serverId}_');
}).toList();
for (final key in keysToDelete) {
Future<void> clearServerStats(String serverId) async {
final indexKey = 'idx_$serverId';
final keys = (_indexBox.get(indexKey) as List?)?.cast<String>() ?? [];
for (final key in keys) {
remove(key);
}
await _indexBox.delete(indexKey);
}
Future<void> compact() async {
Loggers.app.info('Start compacting connection_stats database...');
try {
await box.compact();
await _indexBox.compact();
Loggers.app.info('Finished compacting connection_stats database');
} catch (e, st) {
Loggers.app.warning('Failed compacting connection_stats database', e, st);
rethrow;
}
}
}
String? get dbPath => box.path;
String? get indexDbPath => _indexBox.path;
Iterable<dynamic> get indexDbKeys => _indexBox.keys.where((k) => k.toString().startsWith('idx_'));
Future<int> dbSizeAsync() async {
final path = dbPath;
if (path == null) return 0;
final file = File(path);
return await file.exists() ? await file.length() : 0;
}
Future<int> indexDbSizeAsync() async {
final path = indexDbPath;
if (path == null) return 0;
final file = File(path);
return await file.exists() ? await file.length() : 0;
}
}

View File

@@ -1,3 +1,5 @@
import 'dart:async';
import 'package:fl_lib/fl_lib.dart';
import 'package:server_box/data/model/server/private_key_info.dart';
@@ -7,44 +9,112 @@ class PrivateKeyStore extends HiveStore {
static final instance = PrivateKeyStore._();
List<PrivateKeyInfo>? _cache;
StreamSubscription<dynamic>? _boxWatchSub;
bool _suppressWatch = false;
@override
Future<void> init() async {
await super.init();
await _boxWatchSub?.cancel();
_boxWatchSub = box.watch().listen((_) {
if (!_suppressWatch) {
_cache = null;
}
});
}
@override
bool clear({bool? updateLastUpdateTsOnClear}) {
_suppressWatch = true;
try {
_cache = null;
return super.clear(updateLastUpdateTsOnClear: updateLastUpdateTsOnClear);
} finally {
_suppressWatch = false;
}
}
void invalidateCache() {
_cache = null;
}
void put(PrivateKeyInfo info) {
set(info.id, info);
_suppressWatch = true;
try {
set(info.id, info);
_cache = null;
} finally {
_suppressWatch = false;
}
}
void _putWithoutInvalidatingCache(PrivateKeyInfo info) {
_suppressWatch = true;
try {
box.put(info.id, info);
} finally {
_suppressWatch = false;
}
}
List<PrivateKeyInfo> fetch() {
return List<PrivateKeyInfo>.from(_cache ??= _loadAll());
}
List<PrivateKeyInfo> _loadAll() {
final ps = <PrivateKeyInfo>[];
final toPersist = <PrivateKeyInfo>[];
for (final key in keys()) {
final s = get<PrivateKeyInfo>(
key,
fromObj: (val) {
if (val is PrivateKeyInfo) return val;
if (val is Map<dynamic, dynamic>) {
final map = val.toStrDynMap;
if (map == null) return null;
try {
final pki = PrivateKeyInfo.fromJson(map as Map<String, dynamic>);
put(pki);
return pki;
} catch (e) {
dprint('Parsing PrivateKeyInfo from JSON', e);
}
}
return null;
},
fromObj: (val) => _decodePrivateKeyInfo(val, toPersist: toPersist),
);
if (s != null) {
ps.add(s);
}
}
for (final pki in toPersist) {
_putWithoutInvalidatingCache(pki);
}
return ps;
}
PrivateKeyInfo? _decodePrivateKeyInfo(dynamic val, {List<PrivateKeyInfo>? toPersist}) {
if (val is PrivateKeyInfo) return val;
if (val is Map<dynamic, dynamic>) {
final map = val.toStrDynMap;
if (map == null) return null;
try {
final pki = PrivateKeyInfo.fromJson(map as Map<String, dynamic>);
if (toPersist != null) {
toPersist.add(pki);
}
return pki;
} catch (e) {
dprint('Parsing PrivateKeyInfo from JSON', e);
}
}
return null;
}
PrivateKeyInfo? fetchOne(String? id) {
if (id == null) return null;
return box.get(id);
if (_cache != null) {
for (final pki in _cache!) {
if (pki.id == id) return pki;
}
}
return _decodePrivateKeyInfo(box.get(id));
}
void delete(PrivateKeyInfo s) {
remove(s.id);
_suppressWatch = true;
try {
remove(s.id);
_cache = null;
} finally {
_suppressWatch = false;
}
}
}

View File

@@ -1,3 +1,5 @@
import 'dart:async';
import 'package:fl_lib/fl_lib.dart';
import 'package:server_box/data/model/server/server_private_info.dart';
@@ -10,11 +12,60 @@ class ServerStore extends HiveStore {
static final instance = ServerStore._();
List<Spi>? _cache;
StreamSubscription<dynamic>? _boxWatchSub;
bool _suppressWatch = false;
@override
Future<void> init() async {
await super.init();
_boxWatchSub?.cancel();
_boxWatchSub = box.watch().listen((_) {
if (!_suppressWatch) {
_cache = null;
}
});
}
@override
bool clear({bool? updateLastUpdateTsOnClear}) {
_suppressWatch = true;
try {
_cache = null;
return super.clear(updateLastUpdateTsOnClear: updateLastUpdateTsOnClear);
} finally {
_suppressWatch = false;
}
}
void invalidateCache() {
_cache = null;
}
void put(Spi info) {
set(info.id, info);
_suppressWatch = true;
try {
set(info.id, info);
_cache = null;
} finally {
_suppressWatch = false;
}
}
void _putWithoutInvalidatingCache(Spi info) {
_suppressWatch = true;
try {
box.put(info.id, info);
} finally {
_suppressWatch = false;
}
}
List<Spi> fetch() {
return List<Spi>.from(_cache ??= _loadAll());
}
List<Spi> _loadAll() {
final List<Spi> ss = [];
for (final id in keys()) {
final s = get<Spi>(
@@ -26,7 +77,7 @@ class ServerStore extends HiveStore {
if (map == null) return null;
try {
final spi = Spi.fromJson(map as Map<String, dynamic>);
put(spi);
_putWithoutInvalidatingCache(spi);
return spi;
} catch (e) {
dprint('Parsing Spi from JSON', e);
@@ -43,15 +94,27 @@ class ServerStore extends HiveStore {
}
void delete(String id) {
remove(id);
_suppressWatch = true;
try {
remove(id);
_cache = null;
} finally {
_suppressWatch = false;
}
}
void update(Spi old, Spi newInfo) {
if (!have(old)) {
throw Exception('Old spi: $old not found');
}
delete(old.id);
put(newInfo);
_suppressWatch = true;
try {
remove(old.id);
set(newInfo.id, newInfo);
_cache = null;
} finally {
_suppressWatch = false;
}
}
bool have(Spi s) => get(s.id) != null;
@@ -60,12 +123,9 @@ class ServerStore extends HiveStore {
final ss = fetch();
final idMap = <String, String>{};
// Collect all old to new ID mappings
for (final s in ss) {
final newId = s.migrateId();
if (newId == null) continue;
// Use s.oldId as the key, because s.id would be empty for a server being migrated.
// s.oldId represents the identifier used before migration.
idMap[s.oldId] = newId;
}
@@ -74,23 +134,19 @@ class ServerStore extends HiveStore {
final container = ContainerStore.instance;
bool srvOrderChanged = false;
// Update all references to the servers
for (final e in idMap.entries) {
final oldId = e.key;
final newId = e.value;
// Replace ids in ordering settings.
final srvIdx = srvOrder.indexOf(oldId);
if (srvIdx != -1) {
srvOrder[srvIdx] = newId;
srvOrderChanged = true;
}
// Replace ids in jump server settings.
final spi = get<Spi>(newId);
if (spi != null) {
final jumpId = spi.jumpId; // This could be an oldId.
// Check if this jumpId corresponds to a server that was also migrated.
final jumpId = spi.jumpId;
if (jumpId != null && idMap.containsKey(jumpId)) {
final newJumpId = idMap[jumpId];
if (spi.jumpId != newJumpId) {
@@ -100,7 +156,6 @@ class ServerStore extends HiveStore {
}
}
// Replace ids in [Snippet]
for (final snippet in snippets) {
final autoRunsOn = snippet.autoRunOn;
final idx = autoRunsOn?.indexOf(oldId);
@@ -112,7 +167,6 @@ class ServerStore extends HiveStore {
}
}
// Replace ids in [Container]
final dockerHost = container.fetch(oldId);
if (dockerHost != null) {
container.remove(oldId);
@@ -120,8 +174,16 @@ class ServerStore extends HiveStore {
}
}
for (final spi in ss) {
if (spi.jumpId != null && idMap.containsKey(spi.jumpId)) {
final newJumpId = idMap[spi.jumpId]!;
final newSpi = spi.copyWith(jumpId: newJumpId);
update(spi, newSpi);
}
}
if (srvOrderChanged) {
SettingStore.instance.serverOrder.put(srvOrder);
}
}
}
}

View File

@@ -1,3 +1,5 @@
import 'dart:async';
import 'package:fl_lib/fl_lib.dart';
import 'package:server_box/data/model/server/snippet.dart';
@@ -7,11 +9,60 @@ class SnippetStore extends HiveStore {
static final instance = SnippetStore._();
List<Snippet>? _cache;
StreamSubscription<dynamic>? _boxWatchSub;
bool _suppressWatch = false;
@override
Future<void> init() async {
await super.init();
_boxWatchSub?.cancel();
_boxWatchSub = box.watch().listen((_) {
if (!_suppressWatch) {
_cache = null;
}
});
}
@override
bool clear({bool? updateLastUpdateTsOnClear}) {
_suppressWatch = true;
try {
_cache = null;
return super.clear(updateLastUpdateTsOnClear: updateLastUpdateTsOnClear);
} finally {
_suppressWatch = false;
}
}
void invalidateCache() {
_cache = null;
}
void put(Snippet snippet) {
set(snippet.name, snippet);
_suppressWatch = true;
try {
set(snippet.name, snippet);
_cache = null;
} finally {
_suppressWatch = false;
}
}
void _putWithoutInvalidatingCache(Snippet snippet) {
_suppressWatch = true;
try {
box.put(snippet.name, snippet);
} finally {
_suppressWatch = false;
}
}
List<Snippet> fetch() {
return List<Snippet>.from(_cache ??= _loadAll());
}
List<Snippet> _loadAll() {
final ss = <Snippet>{};
for (final key in keys()) {
final s = get<Snippet>(
@@ -23,7 +74,7 @@ class SnippetStore extends HiveStore {
if (map == null) return null;
try {
final snippet = Snippet.fromJson(map as Map<String, dynamic>);
put(snippet);
_putWithoutInvalidatingCache(snippet);
return snippet;
} catch (e) {
dprint('Parsing Snippet from JSON', e);
@@ -40,16 +91,28 @@ class SnippetStore extends HiveStore {
}
void delete(Snippet s) {
remove(s.name);
_suppressWatch = true;
try {
remove(s.name);
_cache = null;
} finally {
_suppressWatch = false;
}
}
void update(Snippet old, Snippet newInfo) {
if (!have(old)) {
throw Exception('Old snippet: $old not found');
}
delete(old);
put(newInfo);
_suppressWatch = true;
try {
remove(old.name);
set(newInfo.name, newInfo);
_cache = null;
} finally {
_suppressWatch = false;
}
}
bool have(Snippet s) => get(s.name) != null;
}
}