Sanaei bc00d37ad8
Vue3 migration (#4198)
* docs(migration): Phase 1 inventory — Vue 2 / AD-Vue 1 surface area

Captures the breakage surface for the Vue 3 + Ant Design Vue 4 + Vite
migration: 17,650 lines across 69 templates, 3,145 a-* component
instances across 63 files, with per-pattern counts and file lists.

Key findings:
- No Vue filters anywhere — dodges a major Vue 3 breaking change
- 358 v-model uses; AD-Vue 4 absorbs most, custom components don't
- 233 <template slot="X"> usages must become <template #X>
- 49 scopedSlots: { ... } column defs need new slots: { ... } shape
- a-icon is removed in AD-Vue 4 — every icon must be imported

Establishes the 8-phase order; Phase 2 (Vite toolchain) is next.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* build(frontend): Phase 2 — scaffold Vite + Vue 3 + AD-Vue 4

Adds a frontend/ directory that lives alongside the legacy web/html/
Vue 2 templates during the migration. Vite builds into ../web/dist/
so the Go binary will be able to embed the result via embed.FS once
Phase 4 starts moving real pages over.

- package.json pins Vue 3.5, Ant Design Vue 4.2, Vite 6, vue-i18n 10
- vite.config.js: dev server on :5173 with API proxy to the Go panel
  on :2053; build output to ../web/dist/
- src/App.vue is currently a smoke-test placeholder — delete once the
  first real page (login) lands in Phase 4
- node_modules and dist are already ignored at repo root
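
The config described above could look roughly like this — an illustrative sketch, not the exact frontend/vite.config.js (the proxied path list in particular is simplified):

```javascript
// frontend/vite.config.js — sketch of the Phase 2 setup: dev server on
// :5173, API proxy to the Go panel on :2053, build output to ../web/dist.
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';

export default defineConfig({
  plugins: [vue()],
  server: {
    port: 5173,
    proxy: {
      // forward API calls to the locally running Go panel
      '/panel': 'http://localhost:2053',
      '/server': 'http://localhost:2053',
    },
  },
  build: {
    outDir: '../web/dist', // embedded later by the Go binary via embed.FS
    emptyOutDir: true,
  },
});
```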

To verify locally:
  cd frontend && npm install && npm run dev

Pages will be migrated one at a time on the vue3-migration branch.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(frontend): Phase 3 — port utils, models, axios, websocket as ES modules

Ports the framework-agnostic JS from web/assets/js/ into frontend/src/
so Vue 3 pages can import what they need without relying on script-tag
globals.

- web/assets/js/util/index.js (927 lines, 21 classes) →
  frontend/src/utils/legacy.js + a barrel at utils/index.js. All
  classes are now named exports.
- Vue.prototype.$message in HttpUtil → direct import of `message`
  from ant-design-vue (Vue 3 has no Vue.prototype).
- RandomUtil.randomShadowsocksPassword previously defaulted to
  SSMethods.BLAKE3_AES_256_GCM from inbound.js, creating a circular
  import. Replaced with the literal string default.
- MediaQueryMixin (Vue 2 mixin) removed. Replaced by
  composables/useMediaQuery.js — Vue 3 composable returning reactive
  `isMobile`.
- axios-init.js wrapped as setupAxios(); Qs global → npm `qs`.
- websocket.js exported as WebSocketClient class; the implicit
  window.wsClient global is gone — pages instantiate it themselves.
- model/{inbound,outbound,dbinbound,setting,reality_targets}.js
  copied with `export` added on every top-level declaration. Imports
  between models and utils are wired up explicitly.
- subscription.js deferred to Phase 5 (it's a Vue 2 mount, not a util).
- App.vue smoke test exercises SizeFormatter / RandomUtil / Wireguard /
  useMediaQuery so the user can verify Phase 3 with `npm run dev`.

Run `cd frontend && npm install && npm run dev` — qs was added so a
fresh install is required.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 4 — port login.html to Vue 3 + AD-Vue 4 + Vite 8

First real page in the new toolchain. Multi-page Vite: each migrated
page is its own entry. login.html now lives at frontend/login.html with
a thin entrypoint at frontend/src/login.js mounting LoginPage.vue.

Vite 6 → Vite 8.0.11 (per user request). Requires Node 20.19+ or 22.12+.
@vitejs/plugin-vue bumped to ^6.0.6 (peers vite ^8). Ant Design Vue
stays on 4.2.6 — there is no AD-Vue 6.

Vue 2 → Vue 3 / AD-Vue 1 → AD-Vue 4 syntax changes hit on this page:
- new Vue({ el, delimiters, data, methods }) → createApp + <script setup>
- mounted() → onMounted()
- <template slot="X"> → <template #X>
- <a-icon slot="prefix" type="user"> → <template #prefix><UserOutlined />
  </template> with explicit @ant-design/icons-vue imports
- v-model.trim → v-model:value (AD-Vue 4 uses named v-model on inputs)

Three legacy features deferred so Phase 4 stays small:
- i18n (Phase 7 wires up vue-i18n)
- theme switcher (custom component pending Phase 5)
- headline word-cycle animation (purely aesthetic)

Run `cd frontend && npm install && npm run dev`, open
http://localhost:5173/login.html. With Go panel running on :2053 the
form submits real credentials via the configured proxy.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5a — theme system + Vite 8 + vue-i18n 11

Bumps Vite to 8.0.11 (npm install picked up 6.4.2 from the stale
lockfile; clean install resolves the new constraint). Bumps vue-i18n
to 11.1.4 since v10 was just EOL'd.

Migrates aThemeSwitch.html — the two-flavor theme picker + global
themeSwitcher object — into:

- composables/useTheme.js: single reactive `theme` state with
  toggleTheme / toggleUltra. Boot side-effect applies the stored theme
  to <body>/<html> before Vue renders; watchEffect persists changes
  back to localStorage.
- components/ThemeSwitch.vue: full menu version for the main panel.
- components/ThemeSwitchLogin.vue: login-popover version.
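
The composable's contract — read the stored theme at boot, persist every change — can be sketched framework-free. Storage is injected here so the idea is testable; the real useTheme.js wraps this in a Vue ref plus watchEffect against localStorage:

```javascript
// Framework-free sketch of the useTheme persistence contract.
function createThemeStore(storage, key = 'theme') {
  let theme = storage.get(key) ?? 'light'; // boot: start from the stored theme
  return {
    get theme() { return theme; },
    set(next) {
      theme = next;
      storage.set(key, next); // persist every change back
    },
    toggleTheme() { this.set(theme === 'dark' ? 'light' : 'dark'); },
  };
}
```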

AD-Vue 1 → 4 changes hit on this component:
- <a-icon type="bulb" :theme="filled|outlined"> dropped — replaced by
  explicit BulbFilled / BulbOutlined imports from
  @ant-design/icons-vue, swapped via <component :is="BulbIcon">
- Vue.component('a-theme-switch', { ... }) global registration → SFC
  + per-page import
- this.$message.config(...) (Vue 2 instance method) → message.config(...)
  imported from ant-design-vue, called once in login.js at boot

Login page now surfaces a settings button → popover → theme picker.

Known gap: web/assets/css/custom.min.css isn't yet imported into the
new bundle, so toggling dark mode currently only re-themes AD-Vue's
own components, not the panel chrome. The body class is still toggled
so behavior is correct; visual fidelity returns when custom.css is
ported or directly imported.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5b — port four shared components to Vue 3

CustomStatistic.vue and SettingListItem.vue are mechanical
Vue.component → SFC ports.

AppSidebar.vue: AD-Vue 4 dropped <a-icon :type="dynamic">, so the
five sidebar icons (dashboard/user/setting/tool/logout) live in a
name→component map and render via <component :is>. The legacy
<a-drawer slot="handle"> hack is replaced with a sibling fixed-
position toggle button. Tab paths take basePath/requestUri as
props instead of pulling them from Go template scope.

TableSortable.vue: the biggest Vue 3 rewrite of this phase.

  - $listeners is gone — replaced by inheritAttrs: false +
    explicit attrs forwarding
  - scopedSlots: this.$scopedSlots collapsed into Vue 3's unified
    slots object — just iterate Object.keys(this.$slots) and forward
  - Vue 2 h(tag, { props, on, scopedSlots }, children) →
    Vue 3 h(tag, { ...props, ...on }, slotsObject)
  - 'a-table' string → resolveComponent('a-table') so app.use(Antd)
    registration is honored
  - inject: ['sortable'] (Options API) → inject('sortable', null)
    (Composition API) inside the trigger child
  - beforeDestroy → beforeUnmount
  - customRow's return shape flattened (no nested props/on/class)
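
The props/on flattening in the h() bullet is mechanical: Vue 2 separated props and listeners, Vue 3 takes one flat object with listeners as onXxx keys. A generic sketch of that transform (not the TableSortable code itself):

```javascript
// Flatten a Vue 2 style { props, on } options object into the single
// Vue 3 props object, turning each listener into an onXxx property.
function toVue3Props({ props = {}, on = {} }) {
  const listeners = Object.fromEntries(
    Object.entries(on).map(([event, handler]) => [
      'on' + event.charAt(0).toUpperCase() + event.slice(1),
      handler,
    ]),
  );
  return { ...props, ...listeners };
}
```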

Two intentional skips, documented in the migration doc:

  - aClientTable.html — slot fragments, not a component. Migrates
    inline with inbounds.html (new Phase 5f).
  - aPersianDatepicker.html — wraps a Persian-only third-party
    lib; defer until settings.html lands.

Build verified with vite 8.0.11.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): anchor Vite dev proxy so /login.html isn't forwarded

The /login proxy entry was matching any path starting with /login —
including /login.html, which Vite is supposed to serve itself. Without
the Go backend running, this caused ECONNREFUSED noise on every page
load.

Switched to regex patterns anchored with ^...$ so only the bare backend
paths (/login, /logout, /getTwoFactorEnable) and explicit sub-routes
(/panel/*, /server/*) get proxied. Static .html files Vite serves
directly are no longer matched.
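
Vite treats a proxy key that begins with `^` as a regex, which is why anchoring matters — an unanchored `/login` prefix-matches `/login.html`. A sketch (the paths follow this commit message; the real table lives in frontend/vite.config.js):

```javascript
// Why the keys must be anchored:
const unanchored = /^\/login/;  // old behavior: any path starting with /login
const anchored = /^\/login$/;   // fixed: only the bare backend path

// Sketch of the resulting proxy table.
const proxy = {
  '^/login$': 'http://localhost:2053',
  '^/logout$': 'http://localhost:2053',
  '^/getTwoFactorEnable$': 'http://localhost:2053',
  '^/panel/.*': 'http://localhost:2053',
  '^/server/.*': 'http://localhost:2053',
};
```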

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): real dark mode + silence dev proxy ECONNREFUSED noise

Two issues from running login.html against no Go backend:

1. Dark mode toggled the body class but didn't actually re-theme any
   AD-Vue components. The legacy panel relied on custom.min.css which
   we haven't ported. AD-Vue 4 ships its own dark algorithm — wrap
   LoginPage in <a-config-provider :theme="{ algorithm }"> driven by
   our useTheme state, and AD-Vue restyles every component for free.
   Page chrome (background, card, title) gets explicit .is-dark CSS
   since the algorithm only covers AD-Vue components.

2. Vite logged every failed proxy attempt loudly. When the Go panel
   isn't running locally that's pure noise. Added a configure()
   callback that swallows ECONNREFUSED specifically; real errors
   (timeouts, 5xx, anything else) still surface.

Both fixes are dev-experience only — production build is unchanged.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): use legacy panel palette for login page dark mode

Earlier dark mode used invented colors (#141a26 page bg, #1f2937 card)
that didn't match the rest of the panel. Replaced with the actual
values from web/assets/css/custom.min.css:

  light          dark             ultra-dark
  bg #c7ebe2     bg #222d42       bg #0f2d32
  card #fff      card #151f31     card #0c0e12
  title #008771  title #fff/.92   title #fff/.92

Drove everything off CSS custom properties on .login-app so the
.is-dark / .is-ultra class swap is a few var overrides instead of
duplicating selectors. Also restored the legacy card metrics
(2rem radius, 4rem 3rem padding, 2rem title) so the new page
matches the old panel's geometry, not just its colors.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): match legacy wave layout + recolor for dark mode

The wave SVG had inline fill="#c7ebe2" (mint) on the bottom wave, so
in dark/ultra-dark mode it rendered as a pale-white blob against the
dark page. Stripped the inline fills, drove them off CSS variables
that swap with .is-dark / .is-ultra:

  light:      green tints + #c7ebe2 (mint) on the bottom wave
  dark:       #222d42 across all four waves
  ultra-dark: #0f2d32

The wave was also positioned wrong — anchored to the top 200px of
the viewport with absolute positioning. Restored the legacy layout:
  - .waves-header is fixed to the top of the viewport with z-index -1
    so the form floats over it
  - .waves-inner-header pushes the wave SVG down to ~50vh with a
    50vh-tall solid block of the page color
  - .waves SVG itself is 15vh tall, sitting at the bottom of that block

Net effect: top half is solid-colored, then a wavy edge transitions
into the rest of the page, with the form centered on top — matching
the legacy panel exactly.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): bring wave-header to front so the wave actually shows

Two layering bugs were hiding the wave entirely:

1. .ant-layout-content had background: var(--bg-page) which painted an
   opaque rectangle covering the full content area — including the
   fixed wave-header behind it. Made the layout/content transparent
   and moved the bg paint up to .login-app (the outer ant-layout).

2. .waves-header had z-index: -1 which on its own was fine, but with
   .ant-layout-content opaque on top it was doubly buried. Promoted
   the wave-header to z-index: 0 and gave the form .login-row
   z-index: 1, so the form sits above the wave and the wave sits
   above the page-bg.

Also set --bg-page to the legacy mint (#c7ebe2) for light mode so the
bottom half of the page below the wave matches the legacy panel
(was white). Dark mode stays at the surface-100/login-wave palette.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): match legacy wave animation timings + dark page bg

Two reasons the bottom wave looked static in dark/ultra-dark:

1. Animation durations were 7s/10s/13s/20s. Legacy uses 4s/7s/10s/13s.
   The 20s on the bottom wave was so slow that against the low dark-
   mode contrast it read as motionless. Restored the legacy timings.

2. --bg-page in dark mode was #151f31 (card color / surface-100), but
   the legacy .under uses surface-200 (#222d42) — that's the color of
   the bottom half of the page, the same as the wave fill, so the
   wave appears to flow into the page rather than meeting a hard edge.
   Now it does.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): restore Hello/Welcome headline cycle on login

Earlier I deferred the legacy headline word-cycling animation as
"purely aesthetic". Restored it: the title now alternates between
'Hello' and 'Welcome' every 2 seconds, matching the legacy panel.

The legacy implementation toggled .is-visible / .is-hidden classes on
two <b> elements via setTimeout chains and DOM querying. Replaced
with a reactive ref + Vue 3 <Transition mode="out-in"> so the fade
between words is declarative — no manual DOM manipulation, and the
interval is properly cleaned up in onBeforeUnmount.

The earlier "Welcome to 3x-ui" string was wrong on two counts: it
should be just "Welcome", and it should be one of two cycling words
with "Hello" preceding it.

Ultra-dark palette already matched legacy after the prior wave timing
fix; no additional changes needed there beyond the animation speeds
that now also apply to ultra-dark via the shared CSS rules.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): correct dark login bg + give ultra-dark wave real contrast

Two related fixes:

1. Default-dark wave-header bg was wrong. I had #0a2227, but that's
   the *ultra-dark* override; default dark uses --dark-color-background
   = #0a1222. Now the dark-mode top half is the legacy purple-blue
   instead of teal.

2. Ultra-dark wave fill is intentionally near-identical to its bg in
   the legacy palette (#0f2d32 vs #0a2227, ~5/11/11 RGB delta), which
   makes the wave look static even though the animation is running.
   Bumped --wave-fill / --wave-fill-bottom to #1f4d52 in ultra-dark
   only — far enough above the bg that the motion reads, while
   staying within the same teal hue family.

Also corrected ultra-dark --bg-page back to #0f2d32 (was briefly
#0c0e12, which is the card color, not the page color).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): drop ultra-dark bottom-wave seam line

Last fix made the wave fill #1f4d52 in ultra-dark for both top-three
waves and the bottom wave, which gave visible motion but exposed a
hard horizontal line where the bottom wave's flat lower edge met the
page bg (#0f2d32). The user noticed it as "the wave at the bottom
not moving its like a line" — they were seeing the SVG's clipped
bottom edge, not the wave itself.

Solution: only the top three waves get the brighter fill (those carry
the visible motion). The bottom wave reverts to #0f2d32 = --bg-page,
so its flat bottom edge merges seamlessly into the page below. Net
effect: motion is still visible (from waves 2 and 3), and there's no
seam line at the bottom of the SVG.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5c-i — index.html dashboard shell

Replaces the smoke-test App.vue with a real IndexPage shell so the
/index.html route now boots the actual dashboard layout in Vue 3:

- a-config-provider drives AD-Vue 4's dark algorithm from useTheme
  (same pattern as LoginPage)
- AppSidebar (Phase 5b component) is wired in with basePath +
  requestUri props
- a-spin loading state with placeholder card while we build out the
  rest of the page
- Page palette mirrors the legacy: light #f0f2f5, dark #0a1222
  (--dark-color-background), ultra-dark #21242a

The 1,805-line legacy index.html is too big for one commit, so it's
split into five sub-phases on the todo list. This shell is i); then:
ii) status cards + /server/status polling, iii) xray status card,
iv) logs/backup/panel-update modals, v) custom-geo section.

frontend/src/App.vue and frontend/src/main.js (smoke-test scaffold)
are removed — both purposes now served by IndexPage and index.js.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5c-ii — live status cards on the dashboard

Adds the CPU / memory / swap / disk dashboard cards to IndexPage,
backed by a useStatus() composable that polls /panel/api/server/status
every 2s and a Status / CurTotal model ported from the legacy inline
classes in index.html.

- models/status.js — Status & CurTotal classes (CurTotal exposes
  reactive .percent and .color computed-style getters; Status maps
  the API payload + xray state to color/message strings)
- composables/useStatus.js — 2s polling with shallowRef so each fetch
  swaps the whole Status object atomically. WebSocket integration
  intentionally deferred — the legacy panel falls back to this same
  2s polling when its websocket drops, so we ship the proven path
  first and add WS on top in a later sub-phase.
- pages/index/StatusCard.vue — four a-progress dashboard widgets in
  a 2x2 grid (mobile collapses to a 1x4). CPU widget exposes a
  history button; the modal it opens is part of 5c-iv.
- IndexPage now consumes both, plus useMediaQuery so the layout
  responds to viewport changes.
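
The CurTotal getter pattern reduces to a current/total pair with derived percent and color. A sketch of the shape — the color thresholds and hex values here are invented for illustration; the real model is models/status.js:

```javascript
// Sketch of CurTotal: reactive-friendly getters over a current/total pair.
class CurTotal {
  constructor(current = 0, total = 0) {
    this.current = current;
    this.total = total;
  }
  get percent() {
    return this.total > 0 ? Math.round((this.current / this.total) * 100) : 0;
  }
  get color() {
    if (this.percent < 80) return '#1890ff'; // normal (illustrative)
    if (this.percent < 90) return '#faad14'; // warning
    return '#f5222d';                        // critical
  }
}
```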

AD-Vue 4 changes: <a-icon type="area-chart"|"history"> dropped in
favor of explicit AreaChartOutlined / HistoryOutlined imports.
<a-tooltip slot="title"> → <template #title>.

i18n strings still hardcoded English (Phase 7 wires up vue-i18n).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5c-iii — xray status card + stop/restart controls

XrayStatusCard.vue renders the right-hand card on the dashboard:

- Title with mobile-only version tag (matches the legacy collapse)
- Animated badge for the running/stop/error states. The pulsing dot
  comes from xray-pulse keyframes (renamed from runningAnimation in
  legacy custom.min.css). Color rings on the badge use the legacy
  panel's per-state border-color overrides on .ant-badge-status-processing.
- Error state replaces the badge with a popover that surfaces the
  multi-line errorMsg + a logs shortcut.
- Action row at the bottom: optional logs (when ipLimitEnable),
  stop, restart, and version switch.

IndexPage now wires:
- POST /panel/api/server/stopXrayService and /restartXrayService,
  followed by a refresh() so the status card reflects the new state
  without waiting for the next poll tick
- POST /panel/setting/defaultSettings to read ipLimitEnable
- Stub handlers for the panel-logs / xray-logs / version-switch /
  cpu-history modals — those land in 5c-iv

AD-Vue 4 changes hit on this card:
- <a-icon type="bars|poweroff|reload|tool"> → explicit
  BarsOutlined / PoweroffOutlined / ReloadOutlined / ToolOutlined
- <span slot="title|content"> → <template #title|#content>
- The .xray-*-animation classes ship as global <style> (not scoped)
  so they pierce AD-Vue's internal .ant-badge-status-* DOM.

i18n still hardcoded English; Phase 7 wires vue-i18n.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5c-iv (a) — panel update / logs / backup modals

Adds three of the six dashboard modals plus a Quick Actions card
that surfaces them. The remaining three (xray logs, version picker,
CPU history sparkline) ship in 5c-iv-b.

- PanelUpdateModal.vue — current vs latest version, "update now"
  button. Confirm dialog → POST /panel/api/server/updatePanel,
  then poll /server/status for up to 90s until the new panel
  answers, then reload.
- LogModal.vue — panel logs viewer. Filters: rows (10-500), level
  (debug/info/notice/warning/error), syslog toggle. Auto-fetches
  on open and on every filter change. Color-coded timestamps and
  levels via inline span styles. Download button saves the raw log
  as x-ui.log via FileManager.downloadTextFile.
- BackupModal.vue — db export (window.location to /getDb) and
  import (FormData upload to /importDB, then panel restart + reload).
- Quick Actions card surfaces Logs / Backup / Update buttons and
  shows an orange update badge (extra slot) when an update is
  available.

Modal-busy pattern: long-running operations (update, import) emit
a `busy` event with a tip; IndexPage flips its a-spin overlay so the
user sees a loading message while the panel is restarting.

AD-Vue 4 changes:
- v-model on <a-modal> renamed to v-model:open
- v-model on <a-input>/<a-select>/<a-checkbox> uses the named
  v-model:value / v-model:checked pattern
- <a-icon type="..."> dropped — explicit Ant icon imports
  (BarsOutlined, CloudServerOutlined, CloudDownloadOutlined,
  DownloadOutlined, UploadOutlined, SyncOutlined)
- Modal.confirm() replaces this.$confirm() since setup() has no `this`

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5c-iv (b) — cpu-history / xray-logs / xray-version modals

Wires up the three remaining dashboard buttons that were stubbed in
5c-iv (a): the CPU history button on StatusCard, the xray-logs button
in XrayStatusCard's error popover and ipLimitEnable action, and the
"Switch xray" button in XrayStatusCard's action footer.

- Sparkline.vue: shared SVG line chart (composition-API port of the
  inline Vue 2 component). Per-instance gradient id avoids defs
  collisions between sparklines on the same page.
- CpuHistoryModal.vue: bucket dropdown (2m/30m/1h/2h/3h/5h) drives
  GET /panel/api/server/cpuHistory/{bucket}; renders via Sparkline.
- XrayLogModal.vue: rows + filter + direct/blocked/proxy checkboxes;
  POST /panel/api/server/xraylogs/{rows} returns access-log entries
  rendered as a colored HTML table; download button serializes to text.
- VersionModal.vue: collapse with Xray panel (radio list of versions
  from getXrayVersion, install via installXray/{version}) and Geofiles
  panel (per-file reload + Update all). CustomGeo collapse panel is
  Phase 5c-v.
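
The core of an SVG sparkline is scaling samples into polyline points, plus the per-instance gradient id. A sketch of both — illustrative, not the exact Sparkline.vue code:

```javascript
// Per-instance gradient id so several sparklines on one page don't
// collide on a shared <defs> id.
let seq = 0;
const nextGradientId = () => `sparkline-grad-${++seq}`;

// Scale samples into an SVG <polyline> points string; y is inverted
// because SVG's origin is the top-left corner.
function toPolylinePoints(values, width, height) {
  const max = Math.max(...values, 1);
  const step = values.length > 1 ? width / (values.length - 1) : 0;
  return values
    .map((v, i) =>
      `${(i * step).toFixed(1)},${(height - (v / max) * height).toFixed(1)}`)
    .join(' ');
}
```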

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5c-v — custom-geo section in VersionModal

Adds the third collapse panel ("Custom geo") that lets users register
external geosite/geoip files referenced by routing rules via
ext:<filename>:tag. Backend endpoints are unchanged.

- CustomGeoSection.vue: bordered table over /panel/api/custom-geo/list
  with per-row edit, download (refetch), and delete actions, plus an
  Add button and Update-all. Lazy-loads the list when the parent
  collapse opens this panel — closed panels don't fetch.
- CustomGeoFormModal.vue: shared add/edit form with the same alias
  regex (^[a-z0-9_-]+$) and URL validation as legacy. Type and alias
  are immutable when editing — backend rejects changes anyway.
- ext:<filename>:tag value is click-to-copy via ClipboardManager.
- Relative time is computed inline (no moment dep); tooltip shows the
  absolute timestamp.
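
The validation rules can be expressed directly — the alias regex is quoted from this commit; the URL check is an assumed equivalent of the "URL validation" mentioned above:

```javascript
// Alias rule shared with legacy: lowercase alphanumerics, underscore, hyphen.
const ALIAS_RE = /^[a-z0-9_-]+$/;
const isValidAlias = (alias) => ALIAS_RE.test(alias);

// Assumed URL check: must parse and use http(s).
function isValidGeoUrl(raw) {
  try {
    const u = new URL(raw);
    return u.protocol === 'http:' || u.protocol === 'https:';
  } catch {
    return false;
  }
}
```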

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5d-i — settings page shell + dirty tracking

Adds the settings entry as a new Vite multi-page input. Lays down the
shared page chrome (sidebar, save bar, restart, security alert) and the
AllSetting fetch/dirty-poll lifecycle so 5d-ii through 5d-vi can drop
in tab partials without re-implementing it.

- settings.html + src/settings.js: third Vite entry; mounts SettingsPage.
- SettingsPage.vue: page chrome with the legacy two-button save/restart
  bar, conf-alerts banner, and 5 a-tabs (4 always-visible + the formats
  tab gated on subJsonEnable || subClashEnable). Each tab body is an
  a-empty placeholder until 5d-ii…vi fill them in.
- useAllSetting.js composable: POST /panel/setting/all on mount, mirrors
  the legacy 1s busy-loop dirty check via setInterval, and exposes
  fetchAll/saveAll. saveDisabled flips off as soon as the user diverges
  from the server snapshot.
- restartPanel rebuilds the URL (host/port/scheme/base path) from the
  saved settings so users land on the new endpoint after a port or
  cert change.
- models/setting.js: adopts the @/utils alias and a leading file-level
  doc — semantics unchanged.
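
The dirty check itself reduces to comparing the edited settings against the last server snapshot. A sketch of the idea — the real composable keeps the snapshot from /panel/setting/all and runs the comparison on its 1s interval:

```javascript
// Snapshot comparison behind saveDisabled: deep-compare via JSON, which
// works because AllSetting is plain serializable data with stable key
// order. Illustrative, not the exact useAllSetting.js code.
function makeDirtyCheck(snapshot) {
  const frozen = JSON.stringify(snapshot);
  return (current) => JSON.stringify(current) !== frozen;
}
```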

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5d-ii — settings General tab

Ports the panel/general partial (the largest single tab) — six
collapse panels: General, Notifications, Certificates, External
traffic webhook, Date and time, LDAP.

- GeneralTab.vue receives the reactive AllSetting via props and binds
  fields directly with v-model:value; SettingsPage stays the sole
  fetch/save owner.
- remarkModel/remarkSeparator surfaced as computed v-models that
  read+write the underlying single-string field (legacy stores them
  packed as <separator><orderedKeys>, e.g. "-ieo").
- LDAP inbound-tags select binds to a CSV ↔ array computed; inbound
  options come from /panel/api/inbounds/list on mount.
- Language select stays cookie-based via LanguageManager and reloads
  on change — same UX as legacy.
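
The packed remark setting splits naturally into two tiny helpers. A sketch assuming the "<separator><orderedKeys>" layout quoted above (e.g. "-ieo": separator '-', keys in the order i, e, o):

```javascript
// Unpack "-ieo" into its separator and ordered key list.
function unpackRemark(packed) {
  return { separator: packed.charAt(0), keys: packed.slice(1).split('') };
}

// Re-pack for storage in the single-string field.
function packRemark({ separator, keys }) {
  return separator + keys.join('');
}
```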

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5d-iii — settings Security tab + 2FA modal

Ports the panel/security partial: change-credentials form and 2FA
toggle. The 2FA modal is a new shared component since enabling 2FA,
disabling 2FA, and changing credentials all funnel through it with
slightly different copy.

- TwoFactorModal.vue: 'set' flow renders a QR code + manual key + a
  6-digit verifier; 'confirm' flow renders just the verifier. The
  parent passes a confirm(success) callback that fires only when the
  entered code matches the live TOTP value (otpauth lib).
- SecurityTab.vue: holds the local user form (oldUsername/oldPassword/
  new*), POSTs /panel/setting/updateUser, and on success force-redirects
  to logout. When 2FA is on, the credentials change goes through the
  confirm-modal first.
- toggleTwoFactor leaves the switch read-only (the v-bound :checked
  matches AllSetting) and only flips after the modal succeeds, so
  cancelling out leaves state unchanged.
- Adds otpauth ^9.5.1 dep (qrious was already present).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5d-iv — settings Telegram tab

Ports the panel/telegram partial: bot enable/token/chatId/lang in the
General panel, schedule/backup/login/CPU-threshold in Notifications,
and proxy/API-server overrides in the third panel. All bindings live
on the shared AllSetting reactive — no fetch/save logic in this tab.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5d-v — settings Subscription general tab

Ports the subscription/general partial — four collapse panels covering
the master enable switches, presentation/template fields, certs, and
update interval.

- Sub path goes through a strip-on-input + normalize-on-blur computed:
  legacy stripped `:` and `*` and ensured the value starts and ends
  with a single `/` — same here.
- Both `subEnableRouting` and the announce/profile/title/support URLs
  are bound directly on AllSetting.
- The "Subscription URI override" placeholder mirrors the legacy
  pattern for the manual full-URL form.
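
The strip + normalize steps for the sub path can be sketched as one pure function, following the rules described above (drop `:` and `*`, single leading and trailing `/`); edge-case behavior for an empty result is an assumption:

```javascript
// Normalize a subscription path: strip forbidden characters, then wrap
// the remainder in exactly one leading and one trailing slash.
function normalizeSubPath(raw) {
  const stripped = raw.replace(/[:*]/g, '');        // strip-on-input
  const core = stripped.replace(/^\/+|\/+$/g, '');  // trim slash runs
  return core ? `/${core}/` : '/';
}
```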

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5d-vi — settings Subscription formats tab

Ports the subscription/json partial — paths/URIs for the JSON and
Clash formats plus the four packed-JSON sub-fields: fragment, noises,
mux, and direct routing rules.

- subJsonFragment / subJsonMux / subJsonNoises / subJsonRules are each
  a JSON string on the wire; the tab exposes their fields as computed
  v-models that read+write the underlying JSON. Toggling a top-level
  switch off resets the field to "" (matches legacy semantics).
- Direct routing rules surface the IP and domain entries of the seed
  rule array as multi-select tag inputs; setting/removing tags
  edits the rules array in place rather than rebuilding it from
  scratch, so manually-added rules are preserved.
- Tab is gated on subJsonEnable || subClashEnable in the parent (only
  rendered when the user actually opted into one of those formats).
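
Each packed field is a JSON string on the wire, so the computed v-models reduce to read/write helpers like these (a sketch of the pattern; field names in the example are hypothetical):

```javascript
// Read one field out of a JSON-string setting, tolerating '' and bad JSON.
function readPackedField(json, key, fallback) {
  try {
    const obj = JSON.parse(json || '{}');
    return obj[key] ?? fallback;
  } catch {
    return fallback;
  }
}

// Write one field back, re-serializing the whole setting string.
function writePackedField(json, key, value) {
  let obj;
  try { obj = JSON.parse(json || '{}'); } catch { obj = {}; }
  obj[key] = value;
  return JSON.stringify(obj);
}
```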

This closes Phase 5d — full settings page parity with the legacy panel
across all five tabs.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): route /panel/<route> to migrated pages in dev

The sidebar links to production-style URLs like /panel/settings, but
in dev that gets proxied to the legacy Go template — which fails
because we haven't loaded the legacy asset chain. Add a proxy bypass
so /panel and /panel/settings are served from index.html / settings.html
on the Vite dev server itself. Unmigrated routes (inbounds, xray)
still proxy to Go.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(csrf): expose token endpoint for SPA pages and fetch it from axios

The legacy panel pages got their CSRF token from a <meta name="csrf-token">
tag rendered by Go. SPA pages built by Vite don't have that, so every
unsafe (POST/PUT/DELETE) request from them was hitting CSRFMiddleware
with no token and getting 403 — visible as the settings page being
stuck on "Loading…" because POST /panel/setting/all failed.

- web/controller/xui.go: GET /panel/csrf-token returns the session
  token. Lives under the xui group so checkLogin still gates it; the
  CSRFMiddleware on the same group is a no-op for GET.
- frontend/src/api/axios-init.js: cache the token at module scope and
  lazy-fetch it when a non-safe request needs one. Seed from the meta
  tag first when present (legacy compat). On a 403 response, drop the
  cache and retry once — handles the case where a server restart
  rotated the token after the SPA loaded.
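
The cache-and-retry-once logic generalizes to a small helper. A sketch with the token fetcher injected — the real code lives in axios-init.js and is built on axios, not on this shape:

```javascript
// Cached CSRF token with lazy fetch and a single retry on 403, covering
// the case where a server restart rotated the token after the SPA loaded.
function createCsrfClient(fetchToken) {
  let token = null;
  return async function send(doRequest) {
    if (token === null) token = await fetchToken(); // lazy-fetch on demand
    let res = await doRequest(token);
    if (res.status === 403) {
      token = await fetchToken(); // drop the cache, refetch once
      res = await doRequest(token);
    }
    return res;
  };
}
```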

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): keep sidebar links absolute when basePath is empty

The dashboard sidebar built tab keys as basePath + 'panel/...'. In dev
the window-injected basePath is '' so the resulting key was a relative
path like 'panel/settings'. When the browser resolved that against the
current /panel/settings URL it produced /panel/panel/settings — visible
as broken navigation between Dashboard and Settings.

Force a leading slash so the keys are always absolute regardless of
whether the host injected a basePath.
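
The fix boils down to always prefixing with '/' and collapsing any doubled slash. A sketch (AppSidebar's actual helper names may differ):

```javascript
// Build an absolute sidebar key whether basePath is '' (dev) or a
// configured '/xyz/' prefix (production).
function tabKey(basePath, route) {
  return ('/' + basePath + '/' + route).replace(/\/{2,}/g, '/');
}
```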

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5f-i — inbounds page shell + list fetch

Adds the inbounds entry as a fourth Vite multi-page input and wires
/panel/inbounds through the dev proxy bypass. Lays down the page
chrome (sidebar, summary statistics card, refresh button) and the
fetch lifecycle composable so 5f-ii onward can drop in the table
columns and the modals without re-implementing it.

- inbounds.html + src/inbounds.js: fourth Vite entry; mounts InboundsPage.
- InboundsPage.vue: sidebar + summary card (totals over up/down,
  all-time, inbound count, client tags) + a basic table with enable/
  remark/port/protocol/traffic/expiry columns. Row actions, popovers,
  search/filter, auto-refresh, and the WebSocket delta path are all
  deferred to subsequent 5f subphases.
- useInbounds.js composable: GET /panel/api/inbounds/list +
  POST /panel/api/inbounds/onlines + POST /panel/api/inbounds/lastOnline +
  POST /panel/setting/defaultSettings, then computes the
  per-inbound clientCount roll-ups (active/deactive/depleted/expiring/
  online/comments) the table popovers consume.
- models/dbinbound.js + models/inbound.js: switched the legacy-utils
  import to the @/utils alias for consistency with the rest of the
  app. Semantics unchanged.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5f-ii — inbound list table + search/filter + auto-refresh

Fleshes out the inbound list with the full column set, search & filter
toolbar, row enable toggle wired to /panel/api/inbounds/setEnable/:id,
and a per-row action dropdown that emits events the parent will route
to modals as those land in 5f-iii through 5f-vii.

- InboundList.vue (new): toolbar (Add inbound + General actions
  dropdown + Refresh + auto-refresh popover), search-or-filter switch
  with the legacy radio buttons (Active/Disabled/Depleted/Depleting/
  Online), and an a-table with desktop and mobile column variants.
  Cells use AD-Vue 4's #bodyCell slot — protocol/clients/traffic/
  allTime/expiry/info cells render the same popovers and tags as
  legacy. Row enable switch is optimistic with rollback on POST
  failure.
- visibleInbounds computed mirrors the legacy search and filter
  projection: deep search through dbInbound + clients, or filter
  reduces inbound.settings.clients to the selected bucket so the
  table only shows matching client rows.
- Auto-refresh interval is read/written to localStorage with the
  same keys (`isRefreshEnabled`, `refreshInterval`) as the legacy
  panel. WebSocket delta updates are still deferred.
- Action menu emits event payloads {key, dbInbound}; the parent
  currently shows a "coming in later 5f subphase" toast for each.
  Modals (edit/qr/clone/delete/reset/info/clients) land in
  5f-iii through 5f-vii.
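The deep-search projection can be approximated with a plain predicate
(illustrative only; the real computed also applies the filter buckets):

```javascript
// Deep search over an inbound and its clients: serialize the whole
// record and substring-match case-insensitively.
function matchesSearch(dbInbound, query) {
  if (!query) return true;
  return JSON.stringify(dbInbound).toLowerCase().includes(query.toLowerCase());
}
```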

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(inbounds): wrap popover-table rows in <tbody>

Vue's template compiler warned that <tr> can't be a direct child of
<table> per the HTML spec; the browser silently inserts a <tbody>
wrapper but Vue's SSR/hydration path doesn't, which can cause
hydration mismatches. Add explicit <tbody> in both popover tables
(traffic + mobile-info).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5f-iii — inbound add/edit modal + delete/clone/reset

Wires up the inbound CRUD flows. The protocol-specific and transport-
specific forms are still ahead in 5f-iii-b — for now the modal exposes
those as JSON textareas so users can both edit existing inbounds without
losing settings and create new ones from default templates.

- InboundFormModal.vue: tabbed modal with a full Basics tab (enable,
  remark, protocol, listen, port, total GB, traffic reset, expiry
  date) and three JSON-edit tabs (Settings, Stream, Sniffing). Add
  mode stamps a fresh template per protocol via
  Inbound.Settings.getSettings(protocol); changing the protocol in
  add mode restamps the JSON. Edit mode pretty-prints the existing
  JSON so the user sees the same fields they save back.
- POST /panel/api/inbounds/add or /panel/api/inbounds/update/:id on
  submit; on success the parent refreshes the list and the modal
  closes. Malformed JSON in any of the three textareas surfaces a
  message.error and aborts the save without losing user input.
- InboundsPage.vue: wires the row action menu to real handlers —
  edit (opens the modal in edit mode), delete, reset-traffic,
  clone, reset-clients, del-depleted-clients all go through
  Modal.confirm and refresh on success. General actions menu wires
  reset-inbounds / reset-clients / del-depleted-clients the same way.
  Remaining actions (qrcode/info/import/export/copyClients) still
  toast as "coming soon" — those land in 5f-iv and 5f-v.
- Adds dayjs ^1.11.20 dep for the a-date-picker v-model interop.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5f-iv — client add/edit + bulk-add modals

Wires per-inbound client management. Both flows go through the same
addClient/updateClient endpoints as legacy; the modals just funnel
the form state into the right shape (`{id, settings: '{"clients": [...]}'}`).

- ClientFormModal.vue: protocol-aware single-client editor — email/
  password/id/auth/security/flow/subId/tgId/comment/ipLimit/totalGB/
  expiry/renewal fields are shown/hidden per protocol like legacy.
  Edit mode displays the per-client traffic stats with a reset
  button; IP-limit log is read on click and clearable. Random
  helpers (sync icon next to each label) regenerate UUID/email/
  password/sub-id values.
- ClientBulkModal.vue: 1–500 clients in one POST, with the legacy
  five email-generation modes (Random / +Prefix / +Num / +Postfix /
  Pure-Prefix-Num-Postfix). Builds clients via the protocol-aware
  factory and concatenates their toString() output into a single
  settings.clients JSON array.
- InboundsPage.vue: opens both modals from the row action menu
  (`addClient` / `addBulkClient`). They both refresh the inbound list
  on success.
- Outstanding row actions still toast as "coming soon": qrcode,
  showInfo, copyClients, clipboard. Those land in 5f-v / 5f-vi.
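The bulk submit shape reduces to a single JSON array; a minimal sketch
with a stand-in factory:

```javascript
// Build `count` clients via a protocol-aware factory and serialize
// them into the settings payload addClient expects. makeClient is a
// stand-in for the real per-protocol constructor.
function buildBulkSettings(makeClient, count) {
  const clients = Array.from({ length: count }, (_, i) => makeClient(i));
  return JSON.stringify({ clients });
}
```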

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5f-v — inbound info + QR-code modals

Wires the row "info" and "qrcode" actions and ports the legacy
inbound_info_modal end-to-end. The info modal handles every protocol
the legacy panel did:
  • multi-user (VMess/VLess/Trojan/SS-multi/Hysteria) — per-client
    table + share links + per-link QR;
  • SS single-user — share link + QR;
  • WireGuard — full peer table with downloadable peer-N.conf and a
    wg:// share link per peer;
  • Mixed/HTTP/Tunnel — connection-detail tables.

- QrPanel.vue: shared link card (header tag, copy button, optional
  download button, optional QR canvas, monospace footer with the
  raw value). Each panel owns its own QRious instance, repainted on
  value/size change.
- InboundInfoModal.vue: full info modal. Subscription URL block keys
  off subSettings.subURI/subJsonURI; IP-log lazy-loads on open and
  surfaces refresh + clear; tg-id, last-online, depleted/enabled tags
  all match legacy.
- QrCodeModal.vue: lighter modal used for the row "qrcode" action on
  SS-single and WireGuard inbounds (just the QRs, no info table).
- InboundsPage.vue: wires both flows. checkFallback() reproduces the
  legacy logic — when an inbound listens on a unix-socket fallback
  (`@<name>`), the link generator is pointed at the root inbound that
  owns the listen address so QRs/links carry the public host:port +
  the right TLS state. Multi-client navigation (focusing a specific
  client's links) is deferred to 5f-vi where the per-inbound expand-
  row table will pass the email through.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5f-vi — per-inbound client expand-row table

Each multi-user inbound row in the list now expands to show its
client roster, mirroring the legacy aClientTable component.

- ClientRowTable.vue: inner a-table with full desktop column set
  (action icons / enable / online / client-with-status-dot / traffic
  with progress bar / all-time / expiry with reset cycle) and a
  collapsed mobile variant (single dropdown menu + popover info).
  Self-contained: stats are looked up via a per-inbound email->stats
  Map; per-client confirms (reset/delete) live on the row.
- The component emits typed events (edit/qrcode/info/reset-traffic/
  delete/toggle-enable) — InboundsPage routes them back to the
  existing client and info modals (with `findClientIndex` so the
  modal opens focused on the right client).
- InboundList.vue: hooks ClientRowTable into the a-table's
  expandedRowRender slot; row-class-name `hide-expand-icon` and a
  scoped CSS rule hide the chevron for non-multi-user inbounds
  (HTTP/Mixed/Tunnel/WireGuard/SS-single) so they keep looking flat.
- toggle-enable-client routes through updateClient with the same
  `{id, settings: '{"clients": [...]}'}` shape as the other modals,
  so backend parsing stays single-pathed.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5f-iii-b — replace inbound modal JSON textareas with structured forms

Rewrites InboundFormModal to look like the legacy panel: structured
forms for the common case, with a compact "Advanced (JSON)" fallback
for the rare bits we don't yet have UI for.

Tabs:
  • Basics — enable/remark/protocol/listen/port/total/trafficReset/expiry
  • Protocol — protocol-aware:
      VMess/VLess/Trojan/SS-multi/Hysteria in add mode embed an inline
        first-client form (email + ID/password/auth, security, flow,
        subId, comment, total GB, expiry);
      edit mode shows a clients-count summary table;
      VLess: decryption/encryption inputs;
      SS: method dropdown that re-randomizes password and propagates
        method change to the multi-user array (matches legacy
        SSMethodChange);
      HTTP/Mixed: accounts table with add/remove rows + Mixed
        auth/udp/ip toggles;
      Tunnel: address/port/network/followRedirect;
      WireGuard: secretKey/pubKey (regen via Wireguard.generateKeypair)
        + per-peer fields with PSK regen + allowedIPs add/remove +
        keepAlive.
  • Stream — only when canEnableStream(); transport selector with
      structured forms for TCP (proxy-protocol, http camouflage),
      WS (host/path/heartbeat/headers), gRPC (serviceName, multiMode),
      HTTPUpgrade (host/path). KCP/XHTTP fall back to the Advanced tab
      with an alert banner. Security selector with TLS (sni/alpn/
      fingerprint) and Reality (target/serverNames/keypair-gen via
      /panel/api/server/getNewX25519Cert / shortIds / fingerprint).
  • Sniffing — enabled/destOverride/metadataOnly/routeOnly/
      ipsExcluded/domainsExcluded as structured fields.
  • Advanced (JSON) — raw streamSettings + sniffing JSON for users
      who need KCP/XHTTP/sockopt/finalmask/full TLS cert arrays. The
      stream JSON is auto-synced from the live model whenever the
      structured fields change.

State source of truth is a deeply-reactive Inbound + DBInbound pair
cloned on open; submit serializes via inbound.settings.toString() +
inbound.stream.toString() so the wire shape matches the legacy panel
byte-for-byte. streamNetworkChange semantics (clear flow when
TLS/Reality unavailable, reset finalmask.udp when not KCP) are
preserved.

Vision Seed for VLess + finer-grained TCP HTTP camouflage + the full
TLS cert/ECH editor will land in 5f-iii-c.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 5f-vii — shared text/prompt modals + remaining export/import wiring

Wires up the last batch of inbound row + general actions that were
toasting "coming soon": export-inbound-links, export-subs (per-inbound
and global), export-all-links, import-inbound, and the clipboard JSON
peek. Two small shared components back them — both can be reused by
the xray page later.

- TextModal.vue (shared): read-only multi-line viewer with a copy
  button and an optional download button when fileName is set.
  Replaces the legacy txtModal which the inbounds page used for every
  link export.
- PromptModal.vue (shared): generic title + input/textarea + confirm
  callback, with the legacy keybindings (Enter submits in single-line
  mode; Ctrl+S submits in textarea mode). Used here for import-inbound
  and later by the xray-config edits in Phase 6.
- InboundsPage.vue: drops the toast stubs for `import`/`export`/`subs`
  on the general-actions menu and `export`/`subs`/`clipboard` on the
  per-row menu, routing each through openText / openPrompt + the
  appropriate model helper (genInboundLinks, etc.). The copyClients
  cross-inbound modal stays toast-stubbed — that's its own dedicated
  legacy modal worth its own commit.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 6-i — xray page scaffold + Advanced JSON tab

The fifth and last legacy page comes online. Tabs are scaffolded with
a-empty placeholders for the structured editors (Basics / Routing /
Outbounds / Balancers / DNS) so navigation is stable; the
Advanced (JSON) tab is fully functional and lets power users edit
the raw xraySetting tree exactly like the legacy CodeMirror pane.

- xray.html + src/xray.js: fifth Vite multi-page entry, mounted as
  XrayPage; vite.config.js routes /panel/xray and /panel/xray/ to it
  through the dev proxy bypass alongside the other pages.
- XrayPage.vue: page chrome with the Save / Restart-xray bar, restart-
  output popover (surfaces /panel/xray/getXrayResult content when
  startup fails), 6 a-tabs, and a textarea-backed Advanced JSON editor.
  CodeMirror is intentionally not pulled in — the textarea works for
  every modern browser and keeps the bundle slim while structured
  editors land in 6-ii through 6-v.
- useXraySetting.js composable: POST /panel/xray/ on mount, mirrors
  the settings-page 1s busy-loop dirty check for both xraySetting
  and outboundTestUrl, and exposes saveAll + restartXray. The dirty
  flag relies on string equality of the pretty-printed JSON, so
  reformat-only edits don't enable Save.
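One way to implement the comparison described (illustrative, not the
literal composable code):

```javascript
// Normalize-then-compare: reparse and pretty-print both sides so a
// whitespace-only edit in the Advanced tab doesn't enable Save.
function isDirty(savedJson, editedJson) {
  const pretty = (s) => JSON.stringify(JSON.parse(s), null, 2);
  try {
    return pretty(savedJson) !== pretty(editedJson);
  } catch {
    return true; // malformed JSON always counts as dirty
  }
}
```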

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 6-ii — xray Basics tab structured editor

Replaces the placeholder on the Basics tab with a structured form for
the most-touched fields of the xray template — outbound + routing
strategy, log levels, traffic stat counters, and the "basic routing"
shortcuts (block torrent / IPs / domains, direct IPs / domains, IPv4
forced, WARP / NordVPN routing).

- useXraySetting.js: hoists a parsed `templateSettings` reactive
  alongside the JSON string, with two cooperating watches that keep
  them in sync. Editing structured fields stringifies into xraySetting
  for the dirty-poll + Advanced JSON tab; editing the JSON re-parses
  into templateSettings only when valid, so structured tabs stay
  readable mid-edit.
- BasicsTab.vue: collapse panels mirror the legacy partial — General,
  Statistics, Logs, Basic routing. Every input is a computed v-model
  reading/writing into templateSettings; the routing-rule shortcuts
  funnel through ruleGetter/ruleSetter which match the legacy
  templateRuleGetter/templateRuleSetter behavior (replace-first,
  drop-duplicates, pop-the-rule-when-empty). Direct/IPv4 setters
  also call syncOutbound() to provision/prune the matching outbound.
- XrayPage.vue: imports BasicsTab + derives `warpExist`/`nordExist`
  from the parsed templateSettings. WARP/NordVPN provisioning modals
  are still placeholders that toast — those land in 6-v with the
  routing/outbound editors.
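The two-way sync can be sketched framework-free — in the real
composable these are two Vue watch() calls over the string and the
parsed tree:

```javascript
// Framework-free sketch of the cooperating syncs. Structured edits
// restringify into the JSON; JSON edits reparse only when valid, so
// the structured side keeps the last good tree mid-edit.
function createSyncedSetting(initialObj) {
  const s = { json: JSON.stringify(initialObj, null, 2), parsed: initialObj };
  return {
    s,
    editStructured(mutate) {
      mutate(s.parsed);
      s.json = JSON.stringify(s.parsed, null, 2);
    },
    editJson(text) {
      s.json = text;
      try { s.parsed = JSON.parse(text); } catch { /* keep last good parse */ }
    },
  };
}
```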

Default tab flips back to Basics so users land on the structured
editor.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 6-iii — xray Routing tab + rule modal

Replaces the Routing tab placeholder with a full editor for
templateSettings.routing.rules:

- RoutingTab.vue: a-table over the parsed rules with the legacy six-
  column layout (action / source / network / destination / inbound /
  outbound) and the same "lead value + N more" pill renderer for
  multi-value criteria. Mobile drops source/network/destination for
  readability. Per-row dropdown handles edit / move-up / move-down /
  delete; the array-mutation reordering replaces the legacy jQuery
  Sortable drag handle without pulling in a sortable lib.
- RuleFormModal.vue: full form mirroring xray_rule_modal.html —
  CSV inputs for sourceIP/sourcePort/vlessRoute/ip/domain/user/port,
  Network select, Protocol multi-select, Attrs key/value pairs,
  inbound-tag multi-select sourced from
  templateSettings.inbounds + parent inboundTags + dnsTag,
  outbound-tag single-select sourced from templateSettings.outbounds
  + clientReverseTags, and balancerTag from
  templateSettings.routing.balancers. Submit serializes via the
  same shape the legacy `getResult` produces (CSV → array, drop
  empty fields).
- XrayPage.vue: imports RoutingTab and exposes inboundTags +
  clientReverseTags from useXraySetting so the modal can populate
  its tag pools.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 6-iv — xray Outbounds tab + outbound modal

Replaces the Outbounds tab placeholder with a full table + add/edit
flow. The 1.3k-line legacy outbound modal is condensed to a tabbed
modal with structured Basics fields (tag/protocol/sendThrough/domain
strategy) and JSON tabs for the protocol-specific settings + stream
trees — same approach the Inbound modal uses, and a power user can
still edit the same trees via the page-level Advanced (JSON) tab.

- useXraySetting.js: adds fetchOutboundsTraffic +
  resetOutboundsTraffic + testOutbound. Test states are tracked per
  outbound index so the row's Test button can show loading + the
  Test-result column can render the response delay / status / error.
- OutboundsTab.vue: full table (action / identity / address / traffic
  / test result / test) plus a card-list mobile variant with the
  same row dropdown (set-first / edit / move up/down / reset traffic
  / delete). outboundAddresses() reproduces the legacy
  findOutboundAddress logic so each protocol's host:port list is
  rendered consistently. Add/edit go through OutboundFormModal,
  delete goes through Modal.confirm, reset traffic posts to
  /panel/xray/resetOutboundsTraffic with the row's tag (or
  "-alltags-" from the toolbar).
- OutboundFormModal.vue: tag/protocol/sendThrough/domainStrategy on
  the Basics tab; settings + streamSettings as raw JSON on their
  respective tabs. Tag-collision check happens client-side before
  emitting; malformed JSON aborts the save with a message.error.
- XrayPage.vue: imports OutboundsTab and wires the test action to
  the composable's testOutbound helper.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 6-v — xray Balancers tab + DNS placeholder

Brings Balancers to full parity with the legacy panel and adds a
DNS tab placeholder that exposes the full dns/fakedns trees as JSON
so users can edit them without falling through to Advanced.

- BalancerFormModal.vue: tag (with duplicate-tag warning across
  other balancers), strategy (random/roundRobin/leastLoad/leastPing),
  selector tag-mode multi-select sourced from existing outbound
  tags + free-form additions, fallback. Disable-on-invalid is
  driven by the duplicateTag + emptySelector computed flags.
- BalancersTab.vue: empty state with a single "Add balancer" CTA;
  populated state shows the legacy 5-column table (action / tag /
  strategy / selector / fallback) with per-row edit + delete in a
  dropdown. On submit the wire shape preserves the
  `strategy: { type }` nesting only when the strategy is non-default,
  matching the legacy emit. Tag renames also chase across
  routing.rules.balancerTag references so existing rules don't dangle.
- DnsTab.vue: master enable switch + raw JSON for `dns` and
  `fakedns`. Legacy had a dedicated server-by-server editor + a
  fakedns row editor; both are big enough to deserve their own
  commits, and the JSON path supports every field today.
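The rename chase is a small helper; sketched with the xray routing
field names:

```javascript
// When a balancer tag is renamed, rewrite every routing rule that
// referenced the old tag so no rule is left dangling.
function renameBalancerTag(rules, oldTag, newTag) {
  for (const rule of rules) {
    if (rule.balancerTag === oldTag) rule.balancerTag = newTag;
  }
  return rules;
}
```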

WARP / NordVPN provisioning modals still toast as "coming soon" —
those are third-party API integrations worth their own commits.
The xray page now has structured editors for Basics / Routing /
Outbounds / Balancers and JSON editors for DNS / Advanced — every
xray tab the legacy panel offered is functional.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(server): Phase 8 — cut HTML routes over to web/dist/

Production cutover. Every user-facing HTML route now serves the
Vue-3-built bundle from web/dist/ instead of rendering the legacy
Go template; the long-hashed Vite assets are served at /assets/ from
the same embedded filesystem. The legacy templates in web/html/ and
the legacy static tree in web/assets/ are kept on disk for now in
case a quick revert is needed, but nothing the binary serves
references them.

What changed:
- web.go: a new //go:embed dist/* feeds the controller package via
  a SetDistFS hand-off before controller construction. The static
  /assets/ route is rebound: in dev to web/dist/assets/ on disk so
  Vite's incremental rebuilds show up live; in prod to the embedded
  dist via wrapDistFS (rooted one level deeper than wrapAssetsFS).
- controller/dist.go: serveDistPage helper used by every HTML
  handler. Reads dist/<name> from the embedded FS and applies two
  transforms before sending:
    1. injects <script>window.__X_UI_BASE_PATH__="..."</script>
       just before </head> so AppSidebar links resolve under the
       panel's basePath.
    2. when basePath != "/", rewrites Vite's absolute /assets/ URLs
       to <basePath>assets/ so installs running under a custom URL
       prefix load the bundle where the static handler lives.
  HTML responses go out with no-cache so panel upgrades reach
  users on the next refresh; hashed JS/CSS stays cacheable.
- controller/index.go: IndexController.index now serves
  dist/login.html for logged-out callers (the redirect for logged-in
  users is unchanged).
- controller/xui.go: XUIController.{index,inbounds,settings,xraySettings}
  each become a one-line wrapper around serveDistPage.
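The two transforms are plain string surgery; sketched here in
JavaScript for brevity (the real helper is Go in controller/dist.go):

```javascript
// Illustrative only. 1) inject the basePath global before </head>;
// 2) when basePath != "/", repoint Vite's absolute /assets/ URLs
// under the prefix so the static handler can serve them.
function transformDistPage(html, basePath) {
  const inject =
    `<script>window.__X_UI_BASE_PATH__=${JSON.stringify(basePath)}</script>`;
  let out = html.replace('</head>', inject + '</head>');
  if (basePath !== '/') {
    out = out.split('"/assets/').join('"' + basePath + 'assets/');
  }
  return out;
}
```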

Smoke checklist for the maintainer:
- run `cd frontend && npm run build` to refresh web/dist/ before
  building the Go binary (the embed snapshot is taken at compile
  time);
- visit /panel/, /panel/inbounds, /panel/settings, /panel/xray and
  confirm each loads its Vue page;
- log out and log back in to verify the login flow;
- confirm the sidebar links navigate correctly under your install's
  basePath;
- POST flows (e.g. saving settings) still need the CSRF token —
  that endpoint (/panel/csrf-token, added earlier) is unchanged.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 6-vi — WARP + NordVPN provisioning modals

Replaces the toast stubs on the Basics tab and Outbounds toolbar
with the legacy WARP + NordVPN provisioning flows. Both modals now
stage their wireguard outbounds back into templateSettings.outbounds
through the same event channels OutboundsTab uses, so the existing
add / reset / delete / refresh-traffic surface keeps working.

- WarpModal.vue: empty state shows a single Create button that
  generates a wireguard keypair locally (Wireguard.generateKeypair)
  and posts it to /panel/xray/warp/reg; populated state surfaces
  the access_token / device_id / license_key / private_key, lets
  the user upgrade to WARP+ via /panel/xray/warp/license, refreshes
  the account info from /panel/xray/warp/config (plan / quota /
  usage in human-readable bytes), and stages a wireguard outbound
  with the WARP-specific reserved-byte encoding pulled from
  client_id. Add / Reset / Delete go through events the parent
  routes back to templateSettings.outbounds.
- NordModal.vue: dual-tab login (NordVPN access token →
  /panel/xray/nord/reg, or paste a NordLynx private key →
  /panel/xray/nord/setKey). Once authenticated, country / city /
  server selectors fetch from /panel/xray/nord/{countries,servers},
  servers sort by load ascending, the lowest-load server in the
  current city auto-selects. Reset emits oldTag/newTag so the
  parent renames matching routing rules in place; logout emits a
  remove-routing-rules event with prefix `nord-` to purge any
  dangling references.
- XrayPage.vue: holds warpOpen / nordOpen flags, ensures the
  outbounds array exists before mutating it, and wires the modal
  events (add-outbound / reset-outbound / remove-outbound /
  remove-routing-rules) to in-place edits of templateSettings.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): Phase 7 — vue-i18n wired up + login page translated

Sets up vue-i18n on top of the panel's existing TOML translation
files. The Go side stays the source of truth — translators continue
to edit web/translation/*.toml; a sync script snapshots those files
into per-locale JSON the Vue bundle imports. The login page is
translated end-to-end as a worked example; remaining pages can be
converted incrementally without infrastructure churn.

What's in the box:
- scripts/sync-locales.mjs: small TOML→JSON converter that walks
  web/translation/*.toml and writes frontend/src/locales/<code>.json.
  Handles the narrow subset of TOML the panel uses (flat key/value
  pairs + dotted [section.subsection] heads). Wired as a `prebuild`
  + `predev` script so production builds always include the latest
  strings without a manual step.
- src/i18n/index.js: createI18n() in composition mode with all 13
  locales emitted as their own Vite chunks. The active locale (read
  from the same `lang` cookie LanguageManager has always managed)
  plus the en-US fallback are eagerly loaded; the rest are
  dynamically importable via a loadLocale(code) helper. This keeps
  the per-page bundle the user actually downloads small — only ~30
  KB of strings end up in the initial payload, vs ~220 KB if all
  13 were eager.
- All five page entries (index/login/settings/inbounds/xray) wire
  the i18n plugin into createApp via .use(i18n).
- LoginPage.vue: t(...) replaces hardcoded English on the username
  / password / 2FA placeholders, the submit button label, and the
  Settings popover title. The Hello/Welcome headline cycle stays
  hardcoded — those are stylistic, not labels.
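The TOML subset the sync script handles fits in a short parser;
illustrative, not the real scripts/sync-locales.mjs:

```javascript
// Handles only the subset described above: flat key = "value" pairs
// plus dotted [section.subsection] headers. Not a general TOML parser.
function tomlToObject(text) {
  const root = {};
  let cur = root;
  for (const raw of text.split('\n')) {
    const line = raw.trim();
    if (!line || line.startsWith('#')) continue;
    const section = line.match(/^\[(.+)\]$/);
    if (section) {
      cur = root;
      for (const part of section[1].split('.')) cur = cur[part] ??= {};
      continue;
    }
    const kv = line.match(/^([\w.-]+)\s*=\s*"(.*)"$/);
    if (kv) cur[kv[1]] = kv[2];
  }
  return root;
}
```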

The rest of the migration's components still ship hardcoded English
and will be converted page by page in follow-up commits.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* i18n(frontend): translate page chrome — sidebar, save bars, tabs, summary cards

Replaces hardcoded English with t() calls in the components every
user sees on every page load. The translations themselves come from
the existing TOML files via the sync script — no new strings, no
new locale keys.

Per component:
- AppSidebar.vue: 5 menu titles (dashboard / inbounds / settings /
  xray / logout). Computed so the sidebar re-renders when the
  cookie-driven locale flips on reload.
- IndexPage.vue: Quick actions card title + Logs / Backup / Up-to-
  date / Update buttons.
- StatusCard.vue: CPU / Memory / Swap / Storage labels +
  logical-processors / frequency tooltips.
- XrayStatusCard.vue: card title + error popover header + Stop /
  Restart / Switch xray action labels (kept the v-prefix version
  string as-is — it's content, not a label).
- SettingsPage.vue: 5 tab titles + Save / Restart-panel buttons +
  unsaved-changes warning.
- XrayPage.vue: 6 tab titles + Save / Restart-xray buttons +
  unsaved-changes warning.
- InboundsPage.vue: 5 summary-stat card titles.
- InboundList.vue: 10 column titles (computed for live locale),
  Add inbound / General actions buttons + every dropdown menu item,
  search placeholder, filter radio labels, popover titles
  (disabled / depleted / depleting / online), traffic + info
  popover row labels.

Total: ~75 strings localised across 8 files. The remaining English
labels live in the per-tab settings forms, the form modals
(Inbound / Client / Outbound / Rule / Balancer / WARP / Nord), and
the per-row table cell helpers — all incremental work that doesn't
touch infrastructure.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* i18n(frontend): translate every remaining English string on the index page

Closes the index page's i18n coverage. Combined with the page-chrome
commit, every label users see on the dashboard is now sourced from
the TOML translation files.

Per file:
- IndexPage.vue: loading-spinner tip (initial + dynamic).
- BackupModal.vue: modal title, both list-item titles + descriptions
  ("Back up" / "Restore"), in-flight busy tips ("Importing database…"
  / "Restarting panel…").
- PanelUpdateModal.vue: modal title, update-available alert,
  current/latest version row labels, "Up to date" tag + label,
  primary action button. Modal.confirm now uses the translated
  panelUpdateDialog / panelUpdateDialogDesc with #version#
  substitution; success toast uses panelUpdateStartedPopover.
- LogModal.vue: title slot ("Logs"). The Debug/Info/Notice/Warning/
  Error log-level options stay literal — they're xray's wire values,
  not user-facing labels (matches the existing settings-page choice).
- XrayLogModal.vue: title + Filter label. Direct/Blocked/Proxy stay
  literal for the same reason.
- VersionModal.vue: modal title + xray-switch alert + per-file
  tooltip + "Update all" button + custom-geo collapse header. The
  Modal.confirm flows for switchXrayVersion + updateGeofile use
  translated dialog/desc with #version# / #filename# substitution.
- CpuHistoryModal.vue: title slot.
- CustomGeoSection.vue: routing-hint alert, Add / Update-all buttons,
  every column title (computed for live locale), copy/edit/download/
  delete tooltips, copy toast, delete-confirm modal, empty-state
  text.
- CustomGeoFormModal.vue: add/edit titles, OK/cancel labels, Type/
  Alias/URL field labels, alias placeholder, all three validation
  toasts.

Total: ~50 strings localised across 8 index-page files. The Hello /
Welcome login headline cycle and a handful of literal xray wire
values (Direct/Blocked/Proxy/log levels) are intentionally kept
hardcoded.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* i18n(frontend): Phase 7-c — translate settings, inbounds modals, xray tabs

Continues the page-by-page translation pass started in cb37dd55 — runs
every user-visible string on settings (General/Security/Telegram/Sub),
inbounds (Client/QR/Info modals), and xray (Routing/Balancer/Rule/Warp/
Nord/Basics/Outbounds tabs) through useI18n. Updates the TOML→JSON sync
script to escape `@` (vue-i18n parses it as a linked-format prefix) and
refreshes all 13 locale files.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): Phase 9 — restore index dashboard, fix login/CSRF, port legacy styles

- Index dashboard regains the 8 cards that were lost in the SPA port
  (3X-UI panel info, Operation Hours, System Load, Usage, Overall Speed,
  Total Data, IP Addresses, Connection Stats), plus a Config button that
  shows the live xray config.json. Version display falls back through
  panelUpdateInfo → window.__X_UI_CUR_VER__ → '?' so dev mode isn't blank.
- Xray config no longer hangs on load: useXraySetting surfaces failures
  instead of leaving a perpetual spinner, and the Vite dev proxy stops
  hijacking POST requests to migrated routes (only GETs get bypassed).
- Inbound page no longer throws __asyncLoader/emitsOptions errors —
  inbound.js was missing imports (NumberFormatter, SizeFormatter,
  Wireguard) and InboundList kept emitting after unmount.
- Login round-trip works after logout: a public /csrf-token endpoint
  bootstraps the SPA before authentication, axios caches the token
  module-level, and the dev 401 handler navigates to /login.html
  instead of reloading the dashboard into a redirect loop.
- legacy.css mirrors the legacy panel's surface/text variables so dark
  and ultra-dark themes match main; every SPA entry imports it.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): rebuild xray DNS section to match main branch

DnsTab now exposes every field the legacy panel did — top-level toggles
(tag, hosts, queryStrategy, disableCache/queryConcurrency, fallback
strategy, client subnet), the servers table with per-row strategy and
domain/expectIP/unexpectedIP overrides, and the Fake DNS pool. The new
DnsServerModal covers the full add/edit flow and collapses to a bare
string when the user only sets an address — matching the wire shape
the legacy form emits for plain DNS entries like "8.8.8.8".

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): rebuild xray outbound modal with structured per-protocol forms

Replaces the JSON textareas with the same shape the legacy panel uses:
all 13 outbound protocols (vmess/vless/trojan/shadowsocks/socks/http/
mixed/wireguard/tun/dns/loopback/blackhole/freedom) get dedicated
fields, every transport (TCP/KCP/WS/gRPC/HTTPUpgrade/XHTTP) gets its
own panel, and TLS/Reality/sockopt/Mux are configured through the same
controls as the inbound side. Brings the SPA outbound editor to parity
with main so users no longer have to drop into raw JSON.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): bring inbound modal to full parity with main branch

Switches the default protocol on add to VLESS, fixes a crash when adding
a Mixed account (the constructor is SocksAccount, not MixedAccount),
and fills in the fields the SPA was previously delegating to the
Advanced JSON tab:
- TLS: cipher suites, min/max version, reject SNI / disable system root /
  session resumption switches, the certificate array with per-row
  Path-or-Content toggle (Set Default pulls from /panel/setting/
  defaultSettings), One Time Loading, Usage / Build Chain, plus ECH
  key/config with a Get New ECH Cert button.
- Reality: xver, target/SNI sync icons (uses getRandomRealityTarget),
  max time diff, min/max client version, short IDs randomizer, SpiderX,
  mldsa65 seed/verify with Get New Seed.
- Stream: full structured forms for every transport — TCP HTTP
  camouflage gets its request/response editor, mKCP gets MTU/TTI/uplink/
  downlink/CWND/maxSendingWindow, WebSocket / gRPC (now with Authority) /
  HTTPUpgrade get headers + proxy-protocol toggles, XHTTP gets the
  full SplitHTTPConfig surface (mode-aware fields, padding obfs,
  session/sequence placement, uplink data, no-SSE).
- New External Proxy section and a structured Sockopt block (mark,
  TCP keepalive/timeout/clamp, fast open, MPTCP, penetrate, V6Only,
  domain strategy, congestion, TProxy, dialer/interface, trusted XFF).
- VLESS gets the legacy X25519 / ML-KEM-768 buttons that fetch fresh
  decryption/encryption blocks via /panel/api/server/getNewVlessEnc.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): add FinalMask UI (TCP/UDP masks + QUIC params) to inbound and outbound

Mirrors web/html/form/stream/stream_finalmask.html as a shared
FinalMaskForm component used by both modals — they share the same
StreamSettings shape (addTcpMask/addUdpMask/finalmask/enableQuicParams)
so a single template handles both. Surfaces:
- TCP masks for raw/tcp/httpupgrade/ws/grpc/xhttp networks: fragment,
  sudoku, and header-custom (with the 2D clients/servers groups, each
  row supporting array/str/hex/base64 packets and a randomize button
  for base64).
- UDP masks for hysteria protocol or kcp network: hysteria gets just
  salamander; kcp gets the full type list (mkcp variants, header-*,
  xdns/xicmp, header-custom with flat client/server lists, and noise).
  Switching to xdns shrinks the kcp MTU to 900 to match the legacy
  panel's behavior.
- QUIC Params for hysteria or xhttp: congestion (incl. brutal up/down
  fields), debug, UDP hop ports/interval, idle/keepalive timeouts,
  path-MTU discovery toggle, and the four receive-window tunables.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): remove duplicate Outbound test URL from xray Advanced tab

The Basics tab already exposes this field through BasicsTab —
duplicating it on the Advanced tab let two inputs race the same
ref and only added clutter.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): unify theming on vanilla AD-Vue light/dark/ultra-dark

The legacy panel CSS (custom.min.css ported as legacy.css) tinted every
non-primary button teal-green via .dark .ant-btn:not(.ant-btn-primary)
overrides while AD-Vue 4's darkAlgorithm kept primary buttons blue —
producing the mixed blue/green button look on dark mode. Drop legacy.css
entirely and let AD-Vue 4's algorithms own the palette.

Centralize antdThemeConfig in useTheme.js so every page resolves to the
same source of truth (light = defaultAlgorithm, dark = darkAlgorithm,
ultra-dark = darkAlgorithm + deeper colorBgBase/Layout/Container/
Elevated tokens). Each page's <a-config-provider> now imports the
shared computed instead of defining its own copy.

Drops the 67 KB legacy CSS chunk; per-page CSS bundles fall to ≤5.9 KB.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): restore computed import in Settings + Xray pages

When 5f1aba28 dropped the local antdThemeConfig computed (now shared
from useTheme), it also stripped `computed` from the import list — but
both pages still call computed() elsewhere (confAlerts, advanced-tab
helpers). Re-adds it.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): retheme dashboard gauges to AD-Vue blue and shrink them

- StatusCard's CPU/RAM/Swap/Storage dashboards rendered at AD-Vue's
  default 120px width which made the percent text balloon to ~36px.
  Drop to 90px (70px on mobile) so the gauge fits the rest of the card.
- The CurTotal.color thresholds still hardcoded the legacy teal/orange
  palette (#008771 / #f37b24 / #cf3c3c). Switch to AD-Vue's primary /
  warning / danger tokens (#1677ff / #faad14 / #ff4d4f) so the gauges
  match the rest of the panel under both light and dark themes.
- XrayStatusCard's running-animation badge ring also still pointed at
  the deleted --color-primary-100 var; hardcode the new primary blue.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* i18n: shorten backupTitle to "Backup & Restore" across all 13 locales

The backup modal header was the second-longest title in the dashboard
on every locale ("Database Backup & Restore" / "Резервне копіювання
та відновлення бази даних" / etc). Drop the "Database / Veritabanı /
数据库" qualifier — the modal already lives under the "Database"
column, so the shorter form reads cleaner on narrow viewports.

Updated both the .toml source-of-truth files and the synced .json
locales (re-running scripts/sync-locales.mjs).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* i18n: collapse two translation databases into a single web/translation/<lang>.json set

The Vue SPA had been reading from frontend/src/locales/*.json while the
Go binary still loaded web/translation/translate.*.toml — and a
sync-locales.mjs pre-build step kept the two in lockstep, with TOML as
the source of truth. Now that go-i18n v2.6.1 already flattens nested
JSON via recGetMessages/addChildMessages, both runtimes can share one
file per locale.

- Move the 13 nested-JSON locale files to web/translation/<lang>.json
  so they live alongside the Go //go:embed translation/* directive.
- Switch web/locale/locale.go from toml.Unmarshal to json.Unmarshal
  (and drop the pelletier/go-toml import — it's now indirect-only).
  Confirmed via a smoke test that pages.index.cpu, subscription.title,
  tgbot.commands.help, and menu.settings all resolve in en-US, fa-IR,
  ru-RU, and zh-CN.
- Repoint Vue's i18n loader at the new path (../../../web/translation/
  *.json glob) and drop the moved-here pathDelimiter comment that no
  longer applies.
- Delete the 13 legacy translate.*.toml files and the sync-locales.mjs
  script + its npm pre-script hooks (predev/prebuild/i18n:sync). The
  Telegram bot and subscription page still get their messages because
  they were reading the same MessageIDs the JSON files now produce.
- Update copilot-instructions.md so the next contributor knows where
  the canonical translation files live.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): redesign expand-row + retheme client visuals

When you expanded an inbound row, the nested <a-table> inside
ClientRowTable burst out of the parent's scroll-x box — its
.ant-spin-container ended up wider than the parent's narrow
.ant-table-cell, so the child looked oversized while the parent looked
squeezed. Replace the nested table with a CSS-grid layout that owns
its sizing, sits flush inside the expanded cell, and collapses to a
3-column layout on mobile (action menu, client identity, info popover).

While in there, fix three other client-row visuals:
- The Unicode infinity glyph (U+221E) renders as an "m"-shaped
  character in some system fonts (Windows Segoe UI in particular).
  Add a shared <InfinityIcon /> SVG component (legacy panel's path)
  and use it in ClientRowTable, InboundList, and InboundInfoModal —
  desktop and mobile cells.
- The "unlimited quota" traffic bar passed :percent="100" with no
  stroke-color, so AD-Vue auto-coloured it success-green. Pin it to
  the AD-Vue purple token (#722ed1) so it reads as the no-limit
  sentinel rather than another usage state.
- ColorUtils + the in-row statsExpColor still hardcoded the legacy
  teal/orange/red/purple palette (#008771 / #f37b24 / #cf3c3c /
  #7a316f). Map them onto AD-Vue 4's success/warning/danger/purple
  tokens (#52c41a / #faad14 / #ff4d4f / #722ed1) so badges, tags,
  and progress bars all match the rest of the panel.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): darken light-theme page bg so cards stand out

The light-theme --bg-page was #f0f2f5 — close enough to AD-Vue's #fff
card background that the cards faded into the page. Bump it to #e6e8ec
(a more visibly distinct gray) so cards lift cleanly off the surface.
Dark and ultra-dark stay where they were.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): shrink dashboard percent text and surface the unfinished arc

Two follow-up tweaks to the dashboard gauges:
- AD-Vue scales the percent text from the SVG, not from :width, so
  the 90px gauges still rendered the number at ~27px. Pin
  .ant-progress-text to 14px via :deep() and trim the gauge to 70px
  (60px on mobile) so the whole card stays compact.
- The default trail (rgba(0,0,0,0.06) / rgba(255,255,255,0.08)) was
  invisible on the light-theme card. Pass an explicit
  rgba(128,128,128,0.25) trail-color so the unfinished portion is
  visible under both light and dark themes.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): migrate subpage.html to Vue 3 SPA

The subscription info page was the last page still rendered by Go
templates. Move it to the Vite multi-page setup so the whole panel
loads through one toolchain.

Frontend: SubPage.vue mounts at /sub/<id>?html=1 and reads window.__SUB_PAGE_DATA__
for the parsed view-model (traffic / quota / expiry + rendered share
links). Fix descriptions borders against the light-theme card by
painting the row divider on each cell's bottom edge — AD-Vue's <tr>
border doesn't render reliably under border-collapse:collapse.

Backend: serveSubPage reads dist/subpage.html, injects
window.__X_UI_BASE_PATH__ + window.__SUB_PAGE_DATA__ before </head>,
and rewrites Vite's absolute /assets/ URLs when the panel runs under
a URL prefix. Drop the legacy template-FuncMap wiring and switch the
sub server's static mount from web/assets to web/dist/assets.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): inbound modal QR + tabs + restored TLS fallbacks

Per-client QR action: the qr icon on the expand-row table opened the
big info modal instead of the QR modal. Route it to QrCodeModal and
extend that modal with a `client` prop so genAllLinks() produces the
per-client share URLs (and per-peer remarks for WireGuard).

Inbound's Data redesign: split the dense single-page view into three
tabs — Inbound, Client, Subscription. Drop every QR rendering from
this modal (QrCodeModal is the QR home now). Each row in the Inbound
tab is one label/value pair instead of the legacy 2-column grid, and
long values like the VLESS encryption blob render as a wrapping code
block with a copy button so they can't blow out the dialog. The
Subscription tab renders sub URL + JSON URL as clickable anchors that
open in a new tab.

Restored TLS fallbacks UI: the model already exposed
VLESSSettings.Fallback / TrojanSettings.Fallback with addFallback /
delFallback / fallbackToJson, but the form modal never surfaced them
during the Vue 3 migration. Re-add the legacy form (SNI, ALPN, Path,
Destination, PROXY) on the protocol tab, gated on TCP transport plus
(for VLESS) encryption=none — same conditions as main.

Column widths: Protocol 70→130 and All-time Traffic 60→95 in the
inbound list; All-time Traffic 90→130 in the client expand-row, so
the header text fits and tags don't get squeezed.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): navy dark theme + rounded inbound/client corners

Dark theme picks up a refined navy palette (page #0a1426, cards
#142340, sider #0d1d33) so the sidebar blends with the rest of the
surface; ultra-dark stays neutral black. Resolves the previous mismatch
where AD-Vue 4 hardcoded #001529 / #002140 for the sider, trigger and
dark Menu items via Layout.colorBgHeader / colorBgTrigger and Menu's
colorItemBg — overrides go through the component-token map now.

Round the inbound table's outer corners (header start/end + last row
end) and wrap the client expand-row grid in a 1px / 8px-radius border
so the list reads as a contained block instead of a flush rectangle.

Linter-driven whitespace cleanup across inbounds/*.vue rolled into the
same commit since it can't be split out cleanly.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): xray tab fixes — modal close, tag validation, full XHTTP, reset to default

Modal close: BalancersTab / OutboundsTab / RoutingTab confirmDelete used
arrow expressions that returned splice's removed-items array. AD-Vue 4
treats truthy non-thenables from onOk as "still pending" and never closes
the dialog (see ActionButton.js:103-106), so the confirm modal stayed
open. Wrap the body so onOk returns undefined and AD-Vue auto-closes.
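
A minimal model of the behavior (simplified from AD-Vue's actual ActionButton logic, so treat the shape as illustrative):

```javascript
// Simplified model of how AD-Vue 4 interprets an onOk return value:
// a thenable defers the close, a falsy value closes immediately, and any
// other truthy value is treated as "still pending", which is exactly what
// `() => list.splice(i, 1)` returned (splice's removed-items array).
function handleOk(onOk, close) {
  const ret = onOk();
  if (ret && typeof ret.then === 'function') {
    ret.then(close); // close once the promise settles
  } else if (!ret) {
    close(); // undefined/false: auto-close
  }
  // truthy non-thenable: dialog stays open
}
```

Wrapping the splice in braces makes the arrow return undefined, which hits the auto-close branch.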

Tag validation: outbound + balancer modals only flipped between
warning/success on duplicate, leaving the empty case as a green ✓.
Split into a 3-state computed — error (empty) / warning (duplicate) /
success — and wire a help message so the input clearly explains why
the OK button is disabled.

Reset to default: re-add the legacy "Reset to Default" panel at the
bottom of BasicsTab. Calls /panel/setting/getDefaultJsonConfig and
overwrites templateSettings; the existing watch re-stringifies so the
JSON tab + dirty-poll see the new state.

Restored Basics option lists from main: IPs (4→10, +Vietnam/Spain/
Indonesia/Ukraine/Türkiye/Brazil), DomainsOptions (4→10, +regex
entries), BlockDomainsOptions (5→17, +Malware/Phishing/Adult/regex),
ServicesOptions (Reddit/Speedtest in, off-template Microsoft out).

Outbound form parity with main:
  • Reverse Sniffing UI for VLESS — toggle + destOverride checkboxes
    (HTTP/TLS/QUIC/FAKEDNS) + Metadata/Route Only + IPs/Domains
    excluded multi-selects, gated on reverseTag being set.
  • Full XHTTP transport — request headers list, Max Upload Size /
    Min Upload Interval (packet-up), Padding Obfs Mode + sub-fields,
    Uplink HTTP Method, Session/Sequence/UplinkData placement +
    keys, No gRPC Header (stream-up/stream-one), expanded XMUX with
    Max Concurrency/Connections/Reuse/Request/Reusable/Keep-alive.

Strip a-divider from the outbound form per request — replaced with
plain section/item heading divs so the labels and per-row delete
icons stay but the horizontal rule is gone.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): xray Advanced tab parity + finalmask gating

Advanced tab was a single textarea bound to the full xraySetting blob.
Restore the legacy 4-way view: a radio group toggles between All /
Inbounds / Outbounds / Routing Rules, and the textarea reads/writes
the matching slice through templateSettings. Added the legacy header
("Advanced Xray Configuration Template" + description) so the page
introduces itself like main.

Outbound finalmask leaked into protocols that don't have a stream
(Freedom / Blackhole / DNS / Socks / HTTP / Wireguard) because the
v-if only checked outbound.stream. Gate the whole FinalMaskForm on
outbound.canEnableStream() to match main.

Drop the leading divider inside FinalMaskForm — its parent already
provides separation, so the rule above "TCP Masks" was redundant.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): inbound Advanced tab live mirror + QR exact-fit sizing

Advanced tab in the inbound modal showed stale state. The watch only
refreshed advancedJson.stream, so toggling the Sniffing switch in the
Sniffing tab left the Advanced JSON showing the prior value. And
encryption — stored on inbound.settings.encryption, not on stream —
never appeared at all because Advanced only exposed stream + sniffing.

Split the watch into three (stream / sniffing / settings) and add a
settings textarea so encryption / clients / fallbacks live alongside
the existing two views. The submit() path now reads settings from
the JSON tab too (falling back to inbound.settings.toString()) so
power-user edits in Advanced override the structured form on save.

QR canvas: when a longer share-URL bumps the QR matrix size, QRious
falls back to floor(canvasSize / matrixWidth) and centers the pattern,
leaving a white margin (e.g. matrix=41, size=180 → 8px gap). Pre-pick
the QR version from the URL byte length and set canvas size to a
multiple of matrixWidth × pixelSize so the pattern always fills it
edge-to-edge — no white margin even after toggling encryption on.
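
The sizing math works out roughly like this (the capacity table is abbreviated to the first ten QR versions at error level L; the real component covers the full range):

```javascript
// Sketch of exact-fit QR sizing: choose the version from the payload's
// byte length, derive the matrix width (17 + 4 * version per the QR spec),
// and size the canvas to a whole multiple of that width so the pattern
// fills it edge to edge with no centering gap.
function exactFitCanvasSize(byteLength, target = 240) {
  // byte-mode capacities at error level L, versions 1..10 (abbreviated)
  const capacityL = [17, 32, 53, 78, 106, 134, 154, 192, 230, 271];
  const idx = capacityL.findIndex((cap) => byteLength <= cap);
  const version = idx === -1 ? capacityL.length : idx + 1; // clamp for sketch
  const matrixWidth = 17 + 4 * version;
  const pixelSize = Math.max(1, Math.floor(target / matrixWidth));
  return { matrixWidth, size: matrixWidth * pixelSize };
}
```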

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): inbound stream tidy-up + QR sizing + dev proxy

Stream tab clean-up: drop the seven a-divider rules in the inbound
form's Stream tab — replace the labelled ones (Request / Response /
Security) with a section-heading div that matches the outbound modal,
delete the empty rules above TLS sub-blocks / External Proxy /
Sockopt. Empty header-list form-items also leaked margin space below
each "Add header" button across TCP / WS / HTTPUpgrade / XHTTP — gate
each on headers.length > 0 so they vanish until the user adds one.

QR panel: drop the link text under the canvas (the user already has
a copy button on the header). Pin the canvas display size to a fixed
240px square via :style + image-rendering: pixelated/crisp-edges so
a dense WireGuard config QR and its sparser link share the same
on-screen footprint without blurring.

Dev proxy: Node's AggregateError wraps connection failures whenever
DNS returns more than one address (::1 + 127.0.0.1) and the code
lands on the inner errors, not the outer. The existing handler only
checked err.code so the ECONNREFUSED stack still spammed the log
when the Go backend was down. Walk err.errors too, print one
friendly line ("backend not reachable — start the Go server"), then
stay quiet for the rest of the session.
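
The shape of that handler, with hypothetical names (the real one lives in vite.config.js):

```javascript
// Sketch of the dev-proxy error filter: on multi-address DNS results Node
// reports connection failures as an AggregateError whose ECONNREFUSED
// codes sit on err.errors, so checking err.code alone misses them.
let warnedOnce = false;

function isBackendDown(err) {
  const candidates = [err, ...(err.errors || [])];
  return candidates.some((e) => e && e.code === 'ECONNREFUSED');
}

function onProxyError(err, log = console.warn) {
  if (!isBackendDown(err)) {
    log(err); // unrelated failures still get reported in full
    return;
  }
  if (!warnedOnce) {
    log('backend not reachable — start the Go server');
    warnedOnce = true; // stay quiet for the rest of the session
  }
}
```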

Vendor splitting + chunk-size warning: split node_modules into
stable vendor-* chunks so each page only ships the deps it uses and
the browser caches them across versions. ant-design-vue stays as a
single chunk because its components share internals; raise the
chunk-size warning to 1500kB so the build stays quiet (its 1.4MB
minified gzips to ~410kB on the wire).
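
The split is roughly a manualChunks function of this shape (chunk naming is illustrative; scoped packages would need an extra path segment):

```javascript
// Sketch of the vendor split in rollupOptions.output: ant-design-vue stays
// one chunk, every other node_modules package gets a stable vendor-<pkg>
// chunk the browser can cache across releases.
function manualChunks(id) {
  if (!id.includes('node_modules')) return undefined; // app code: default chunk
  if (id.includes('ant-design-vue')) return 'vendor-antd';
  const pkg = id.split('node_modules/').pop().split('/')[0];
  return `vendor-${pkg}`;
}
```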

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): info-modal cleanup + 2FA QR + outbound link import

- 2FA QR: matrix-snap canvas + opaque background to drop white margin
- Inbound info modal: stack Mixed/HTTP/Tunnel as info-rows, hide tab
  strip when only the Inbound tab applies
- Add inline VLESS Reverse tag input on first-client form
- Hide Protocol tab for TUN (no form yet)
- Outbound link converter: route through Outbound.fromLink so
  vless/trojan/ss/hysteria(2) imports work alongside vmess; fix stray
  implicit global in fromLink

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): jalali calendar + drop legacy moment-jalali

- Wire Calendar Type setting to a real Jalali datepicker via
  vue3-persian-datetime-picker, gated by useDatepicker composable
- DateTimePicker wrapper swaps between AD-Vue and Persian picker; keeps
  dayjs v-model contract so existing forms/setters work unchanged
- Theme picker popup explicitly per body.dark / data-theme=ultra-dark
  (AD-Vue 4 doesn't expose CSS vars, so var() fallbacks defaulted to
  white); fix invisible disabled days, SVG arrow fills, popup clipping
  via append-to="body"
- Replace stray moment() calls in dbinbound/inbound models with dayjs;
  the legacy global was undefined under ESM and broke the inbounds list
  whenever any inbound had expiryTime > 0
- Remove legacy moment-jalali / persian-datepicker / aPersianDatepicker
  assets — replaced by the Vue 3 picker

Note: dark/ultra background of the date popup still renders white in
some cases — pending follow-up.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): jalali popup theming + full-month layout

- Re-prefix popup selectors with .vpd-wrapper (popup root that travels
  with appendTo='body'), not .vpd-main (which stays at the input);
  paints the popup's dark/ultra background again
- Drop the 1px border on .vpd-content — with box-sizing: border-box
  it ate 2px from the day-row width, wrapping the 7th cell of every
  row and hiding days 18-31 of months that needed a 5th week

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat: render dates in Jalali when Calendar Type is jalalian

- IntlUtil.formatDate accepts an optional calendar arg; appends the
  BCP-47 -u-ca-persian extension so Intl renders Jalali across all UI
  languages, not just fa-IR
- Plumb the panel's datepicker setting into the SubPage via the Go
  injection (window.__SUB_PAGE_DATA__.datepicker)
- Panel pages (inbound list/info, client row, xray log) read the same
  setting through the useDatepicker composable so the whole panel
  stays consistent
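
Concretely, the calendar extension works like this (locale and options here are examples, not the exact IntlUtil signature):

```javascript
// Appending the BCP-47 -u-ca-persian extension makes Intl render the same
// timestamp in the Jalali calendar under any UI language.
function formatDate(ts, locale, calendar) {
  const tag = calendar ? `${locale}-u-ca-${calendar}` : locale;
  return new Intl.DateTimeFormat(tag, {
    year: 'numeric',
    month: '2-digit',
    day: '2-digit',
  }).format(ts);
}
```

So formatDate(Date.now(), 'ru-RU', 'persian') renders a Jalali date with Russian formatting conventions, with no per-locale special-casing.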

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(frontend): ultra-dark page tint + mobile-friendly inbound view

- Drop --bg-page from #21242a (lighter than the cards) to #050505 in
  ultra-dark across index/sub/settings/inbounds/xray, so cards
  consistently elevate over the page
- Hide the inline sider's children + collapse-trigger and zero its
  width below 768px; the floating drawer-handle remains the menu
  trigger
- Inbounds page mobile pass: tighten content-area + card padding;
  flex-wrap the filter bar instead of stacking; shrink table cell
  padding so all 4 mobile columns fit; bump expand / action / info
  icon hit targets
- Per-client expand row on mobile: soft-tinted rounded cards instead
  of hairline borders, larger action / info touch targets, more
  legible email typography, bigger status badge dot

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore: remove legacy template + asset trees and dead Go template engine

- Delete web/html/ entirely (page templates, form/, modals/, component/,
  common/, settings/) — every route is served from web/dist/ now via
  serveDistPage; nothing in the binary referenced these
- Delete web/assets/ entirely (jQuery-era ant-design-vue, axios, moment,
  codemirror, qrcode/qs/uri/vue/otpauth, custom CSS, Vazirmatn font);
  Vite bundles all of this into web/dist/assets
- Drop the Gin HTML template wiring: remove //go:embed assets +
  //go:embed html/*, the assetsFS/htmlFS vars, the wrapAssetsFS adapter,
  EmbeddedHTML / EmbeddedAssets exports, getHtmlFiles / getHtmlTemplate,
  the i18nWebFunc/funcMap and SetFuncMap call, and the dev/prod
  template-engine branch — only StaticFS for /assets/ is needed now
- Remove dead html()/getContext() helpers and unused imports from
  web/controller/util.go (no c.HTML(...) callers remain)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(frontend): inbound expand chevron position + cpu history layout

- Push the inbound table's expand chevron away from the left edge with
  margin-inline + cell padding so it isn't flush against the corner
- Move "Timeframe: …" caption above the chart (was below); restore
  the line that the previous edit removed
- Fix x-axis time labels being clipped at the bottom of the cpu chart
  — the offset (paddingTop+drawHeight+22 = 222) exceeded the SVG
  viewBox height (220); dropped to +14 so labels sit at y=214 with
  room for descenders
- Move the SVG axis text colors out of <style scoped> into a global
  block — Vue's scoped CSS doesn't always hash-attribute SVG <text>
  descendants, so the dark-mode overrides via :global() weren't
  matching; bumped opacity 0.55 → 0.85 for legibility on navy/black

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(login): language picker in settings popover + fluid card sizing

- Add language select alongside the theme switch (mirrors SubPage)
- Bind headline to pages.login.hello / pages.login.title so the
  "Hello / Welcome" cycle re-translates with the active locale
- Replace AD-Vue 5-breakpoint grid with clamp() sizing so the card
  scales smoothly instead of jumping ~33% at each breakpoint
- Pin horizontal padding so input width stays stable on large viewports

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(frontend): organize entry HTML + bootstrap JS into folders

- Move entry HTML files: frontend/*.html -> frontend/html/*.html
- Move per-page bootstrap modules: src/{index,login,settings,inbounds,xray,subpage}.js -> src/entries/
- Update vite.config rollup inputs and dev-mode MIGRATED_ROUTES to /html/<page>.html

- Build output now lands at web/dist/html/<page>.html
- serveDistPage and subController updated to read from dist/html/

Cleans up the flat frontend/ root which previously interleaved 6 HTML
files with package.json, README, src/, etc. The src/ root similarly
gets rid of 6 entry .js files mixed in alongside api/, components/,
models/, etc.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore: remove obsolete vue3 phase1 inventory doc

The migration is well past phase 1 — the inventory doc has rotted
and the live state lives in the codebase plus the plan files.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(frontend): merge utils/legacy.js into utils/index.js

The barrel was a placeholder for an eventual split that hasn't
happened. Collapsing the two files removes one layer of indirection
and the misleading "legacy" name (the contents are still actively
used by the migrated SPA).

- Move all 930 lines from utils/legacy.js into utils/index.js
- Delete utils/legacy.js
- Update direct import in models/outbound.js to '@/utils'
- Drop a stale legacy.js reference in InboundFormModal comment

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* revert(frontend): keep entry HTML files at frontend/ root

The earlier move to frontend/html/ made dev-mode URLs ugly
(http://localhost:5173/html/index.html instead of plain /). The folder
didn't add real value — it just hid 6 files behind a non-conventional
layout. Revert that piece while keeping src/entries/ (which is a
genuine separation between page bootstrap and the rest of src/).

- HTML files back at frontend/<page>.html
- Vite rollupOptions.input + MIGRATED_ROUTES restored to flat paths
- Build output is web/dist/<page>.html again
- web/controller/dist.go and sub/subController.go read from dist/<name>

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* build(frontend): bump eslint to 10 + add flat config + clean lint warnings

- Upgrade eslint 9.39 -> 10.3 and eslint-plugin-vue 9.33 -> 10.9
- Add eslint.config.js (flat config required by ESLint 10) with
  vue3-recommended rules, sensible defaults, and exemptions for the
  project's existing formatting style
- Drop --ext from the lint script (removed in ESLint 10)
- vue/no-mutating-props is left off because the form-modal pattern
  ports straight from Vue 2 (parent passes a reactive object, child
  mutates it); a real fix is an architectural rewire, separate task

Lint warning cleanup:
- utils/index.js: var -> let/const in the X25519 routines, replace
  obj.hasOwnProperty(...) with Object.prototype.hasOwnProperty.call(...)
- Remove unused imports (reactive, ref, Inbound) in ClientFormModal,
  InboundInfoModal, QrCodeModal, DnsServerModal, OutboundFormModal,
  SubPage; remove unused locals (isClientOnline, ONLINE_GRACE_MS,
  fetchAll, isSocks, isHTTP, _antdAlgorithm)
- XrayStatusCard: declare 'open-logs' on defineEmits (was emitted but
  not declared)
- RuleFormModal: rename v-for var t -> tag (shadowed useI18n's t)
- Drop stale eslint-disable directives (no-new, no-unused-vars)
- OutboundsTab/InboundList: drop redundant initial null assigns
- InboundInfoModal/OutboundFormModal: explicit eslint-disable for the
  intentional local-ref-shadows-prop pattern in modal drafts

`npm run lint` now passes with 0 errors and 0 warnings.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(inbounds): one client identity across multiple inbounds via subId

Lets the operator add the same email under the same subId to several
inbounds. Xray reports traffic per email, so a single client_traffics
row acts as the shared accumulator — no aggregation overhead, quota and
expiry stay consistent.

- Email validation allows duplicates only when subId matches
- AddClientStat upserts via OnConflict DoNothing (idempotent on rerun)
- Stat/IP rows survive client deletion when a sibling inbound still
  references the email
- enrichClientStats tops up GORM-preloaded stats with rows whose
  inbound_id points at a sibling, so every panel view sees usage
- disableInvalidClients cascades enable=false and syncs the row's
  total/expiry into every sibling JSON when the shared identity expires
- DelDepletedClients removes the depleted client from all referencing
  inbounds, batched
- Subscription services dedupe traffic by email so shared quota is
  counted once
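
The relaxed validation rule in the first bullet amounts to the following (a JS paraphrase of the Go-side check; names are illustrative):

```javascript
// Duplicate emails are rejected unless the new client reuses the existing
// subId, i.e. it joins the same shared identity on another inbound.
function emailConflict(existingClients, candidate) {
  const dup = existingClients.find((c) => c.email === candidate.email);
  if (!dup) return false; // unique email, always fine
  return dup.subId !== candidate.subId; // same subId: shared identity, allowed
}
```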

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs(frontend): rewrite README for multi-page Vue 3 layout

Reflects the current state — embedded build, per-route HTML entries,
ESLint 10 flat config, src/ layout, and the steps to add a new page.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* build(frontend): drop deprecated rimraf/glob/inflight transitive deps

vue3-persian-datetime-picker pinned moment-jalaali to ^0.9.4, which
pulled rimraf@3 → glob@7 → inflight@1. inflight in particular leaks
memory and is unmaintained. Override moment-jalaali to ^0.10.4 (same
runtime API, dropped the legacy build deps) so npm install no longer
warns and the dep tree is 12 packages lighter.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(nodes): multi-node panel orchestration (CRUD, deployment, traffic sync, sub per-node)

- Node model + service + controller (/panel/api/nodes/*) with bearer-token apiToken auth
- Heartbeat job @every 10s; status/latency/xrayVersion surfaced in Nodes UI
- Runtime abstraction (Local + Remote) so inbound/client mutations target the
  inbound's owning node instead of always hitting the local xray
- Inbounds gain optional NodeID; tag-based correlation with remote panel (no
  RemoteInboundID column needed)
- NodeTrafficSyncJob @every 10s pulls absolute counters + online/lastOnline
  from each enabled+online node and writes them into central DB; 30s reset
  grace window prevents post-reset overwrite
- Reset propagation to nodes (best-effort) on client/inbound/all reset paths
- Subscription server uses node.Address for inbounds with NodeID, falling back
  to existing host resolution for local inbounds
- Frontend: Nodes page, "Deploy to" select in inbound form, Node column on
  inbound list, hostOverride threaded through genAllLinks/QR/Info modals

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(stats): system history modal + per-node CPU/Mem trends across all locales

Backend
- web/service/metric_history.go: generic in-memory ring buffer with two
  singletons — system-wide (cpu/mem/netUp/netDown/online/load1/5/15)
  and per-node (cpu/mem) keyed by node id
- ServerService.AppendStatusSample writes all 8 metrics every 2s on the
  same tick; AppendCpuSample/AggregateCpuHistory kept for back-compat
- NodeService.UpdateHeartbeat appends cpu/mem only on online ticks so
  offline gaps render as missing data, not phantom dips
- New routes: GET /panel/api/server/history/:metric/:bucket and
  GET /panel/api/nodes/history/:id/:metric/:bucket, both whitelisted

Frontend
- Sparkline component generalized: arbitrary value range (auto-scale
  when valueMax=null), pluggable yFormatter/tooltipFormatter for B/s,
  client counts, load averages
- SystemHistoryModal replaces CpuHistoryModal with tabs for every
  metric; opened from a tag on the 3X-UI card next to Documentation
- NodeHistoryPanel: expandable row on the Nodes table showing per-node
  CPU and Mem trends, refreshed every 15s

Localization
- Backfill systemHistoryTitle / trendLast2Min / pages.inbounds.{node,
  deployTo, localPanel} and the entire pages.nodes block (51 keys
  including statusValues + toasts) into all 11 non-en/fa locales:
  ar-EG, es-ES, id-ID, ja-JP, pt-BR, ru-RU, tr-TR, uk-UA, vi-VN,
  zh-CN, zh-TW

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(embed): include underscore-prefixed Vite chunks in dist FS

go:embed silently excludes files whose names start with `_` or `.`,
so the `_plugin-vue_export-helper-<hash>.js` chunk that Vite/rolldown
emits for @vitejs/plugin-vue was missing from the production binary.
First import at runtime hit a 404 and the SPA failed to mount — blank
page on every page load, no error in the server logs because the
asset 404 was just a static-handler miss.

Switched the directive to `//go:embed all:dist` which keeps the same
root layout but disables the underscore/dot exclusion rule. Dev mode
was unaffected (it serves dist/assets/ from disk, not the embedded FS).
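
The fix is a one-word change to the embed pattern. A sketch of the directive difference, assuming a `web` package with a `dist/` tree next to it (the variable name is hypothetical; this fragment only compiles inside a module where `dist/` exists at build time):

```go
package web

import "embed"

// A plain `//go:embed dist` applies go:embed's default exclusion rule:
// files whose names begin with "_" or "." are silently skipped, which
// drops Vite chunks like _plugin-vue_export-helper-<hash>.js.
// The `all:` prefix embeds every file under dist/ unconditionally,
// keeping the same "dist/..." paths inside the FS.
//
//go:embed all:dist
var distFS embed.FS
```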

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ci: build frontend bundle before Go compile in release.yml + Dockerfile

Phase 8 cut all panel HTML routes over to web/dist/ and embedded the
Vite bundle into the Go binary via //go:embed all:dist. web/dist/ is
.gitignored, so on a fresh CI checkout it doesn't exist — every Go
build since Phase 8 has been failing with "pattern dist: no matching
files found" or producing a binary that 404s on first asset request.

release.yml: add a setup-node@v4 + npm ci + npm run build trio before
the existing go build step in both the Linux matrix job (7 arches)
and the Windows job. npm cache is keyed on frontend/package-lock.json.

Dockerfile: add a node:22-alpine frontend stage that runs npm ci +
npm run build and emits to /src/web/dist (via vite.config.js's outDir).
The golang builder stage then COPY --from=frontend /src/web/dist into
./web/dist before the go build, so embed.FS sees the bundle.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(ws): live updates on inbounds/xray/nodes pages, drop polling + manual refresh

Replaces the legacy polling + manual-refresh model with WebSocket pushes
across the three live-data pages. The hub already broadcast traffic /
client_stats / outbounds; this wires the frontend to consume them and
adds a new `nodes` channel for the heartbeat job's snapshot.

Frontend
- new useWebSocket composable: page-scoped singleton WebSocketClient,
  lifecycle-managed on/off, leaves disconnect to page-unload
- inbounds: useInbounds gains applyTrafficEvent / applyClientStatsEvent
  / applyInvalidate that merge counters and online/lastOnline in place;
  InboundsPage subscribes; InboundList drops the auto-refresh popover,
  the refresh button, and the now-unused refreshing prop
- xray outbounds: useXraySetting gains applyOutboundsEvent; XrayPage
  subscribes; OutboundsTab drops the refresh button + emit
- nodes: useNodes gains applyNodesEvent and stops the 5s
  setInterval/visibilitychange polling; NodesPage subscribes;
  NodeList drops the refresh button and ReloadOutlined import

Backend
- web/websocket: new MessageTypeNodes + BroadcastNodes notifier
- node_heartbeat_job: after wg.Wait(), reload the table once and
  BroadcastNodes(updated). Gated on websocket.HasClients() so a panel
  with no open browser doesn't spend the DB read

Bug fixes spotted in this pass
- websocket.js #buildUrl defaulted basePath to '' when the global was
  missing (dev mode), producing `ws://host:portws` and a SyntaxError
  on the WebSocket constructor. Fall back to '/' and ensure leading
  slash.
- vite.config.js: forward /ws to ws://localhost:2053 with ws:true so
  dev (5173) reaches the Go backend's WebSocket
- NodeFormModal: a-input-password's visibilityToggle is Boolean in
  AntD Vue 4; the v3-era object form (`{ visible, 'onUpdate:visible' }`)
  triggered a Vue prop-type warning. Drop the override (default true
  shows the eye icon and toggles internally) and remove the orphaned
  tokenVisible ref

Translations
- pages.inbounds.autoRefresh / autoRefreshInterval: removed from all
  13 locales (UI gone)
- pages.nodes.refresh: removed from all 13 locales (UI gone)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(inbounds): hide Node column when no nodes are defined

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-09 17:47:35 +02:00

// Package service provides business logic services for the 3x-ui web panel,
// including inbound/outbound management, user administration, settings, and Xray integration.
package service

import (
	"context"
	"encoding/json"
	"fmt"
	"sort"
	"strconv"
	"strings"
	"time"

	"github.com/google/uuid"
	"github.com/mhsanaei/3x-ui/v2/database"
	"github.com/mhsanaei/3x-ui/v2/database/model"
	"github.com/mhsanaei/3x-ui/v2/logger"
	"github.com/mhsanaei/3x-ui/v2/util/common"
	"github.com/mhsanaei/3x-ui/v2/web/runtime"
	"github.com/mhsanaei/3x-ui/v2/xray"
	"gorm.io/gorm"
	"gorm.io/gorm/clause"
)

// InboundService provides business logic for managing Xray inbound configurations.
// It handles CRUD operations for inbounds, client management, traffic monitoring,
// and integration with the Xray API for real-time updates.
type InboundService struct {
	// xrayApi is retained for backwards compatibility with bulk paths
	// that still talk to the local engine directly (e.g. traffic-reset
	// jobs that scope to NodeID IS NULL inbounds anyway). New code paths
	// route through runtimeFor() instead so they can target remote nodes.
	xrayApi xray.XrayAPI
}

// runtimeFor returns the Runtime adapter for an inbound's destination
// engine. Returns the local runtime when the inbound has no NodeID
// (legacy/local inbounds); otherwise the cached Remote for that node.
//
// nil is returned only when the runtime Manager hasn't been wired yet
// (extremely early bootstrap). Callers treat nil as a transient error
// and either fall back to needRestart=true or surface "panel still
// starting" upstream.
func (s *InboundService) runtimeFor(ib *model.Inbound) (runtime.Runtime, error) {
	mgr := runtime.GetManager()
	if mgr == nil {
		return nil, fmt.Errorf("runtime manager not initialised")
	}
	return mgr.RuntimeFor(ib.NodeID)
}

type CopyClientsResult struct {
	Added   []string `json:"added"`
	Skipped []string `json:"skipped"`
	Errors  []string `json:"errors"`
}

// enrichClientStats parses each inbound's clients once, fills in the
// UUID/SubId fields on the preloaded ClientStats, and tops up rows owned by
// a sibling inbound (shared-email mode — the row is keyed on email so it
// only preloads on its owning inbound).
func (s *InboundService) enrichClientStats(db *gorm.DB, inbounds []*model.Inbound) {
	if len(inbounds) == 0 {
		return
	}
	clientsByInbound := make([][]model.Client, len(inbounds))
	seenByInbound := make([]map[string]struct{}, len(inbounds))
	missing := make(map[string]struct{})
	for i, inbound := range inbounds {
		clients, _ := s.GetClients(inbound)
		clientsByInbound[i] = clients
		seen := make(map[string]struct{}, len(inbound.ClientStats))
		for _, st := range inbound.ClientStats {
			if st.Email != "" {
				seen[strings.ToLower(st.Email)] = struct{}{}
			}
		}
		seenByInbound[i] = seen
		for _, c := range clients {
			if c.Email == "" {
				continue
			}
			if _, ok := seen[strings.ToLower(c.Email)]; !ok {
				missing[c.Email] = struct{}{}
			}
		}
	}
	if len(missing) > 0 {
		emails := make([]string, 0, len(missing))
		for e := range missing {
			emails = append(emails, e)
		}
		var extra []xray.ClientTraffic
		if err := db.Model(xray.ClientTraffic{}).Where("email IN ?", emails).Find(&extra).Error; err != nil {
			logger.Warning("enrichClientStats:", err)
		} else {
			byEmail := make(map[string]xray.ClientTraffic, len(extra))
			for _, st := range extra {
				byEmail[strings.ToLower(st.Email)] = st
			}
			for i, inbound := range inbounds {
				for _, c := range clientsByInbound[i] {
					if c.Email == "" {
						continue
					}
					key := strings.ToLower(c.Email)
					if _, ok := seenByInbound[i][key]; ok {
						continue
					}
					if st, ok := byEmail[key]; ok {
						inbound.ClientStats = append(inbound.ClientStats, st)
						seenByInbound[i][key] = struct{}{}
					}
				}
			}
		}
	}
	for i, inbound := range inbounds {
		clients := clientsByInbound[i]
		if len(clients) == 0 || len(inbound.ClientStats) == 0 {
			continue
		}
		cMap := make(map[string]model.Client, len(clients))
		for _, c := range clients {
			cMap[strings.ToLower(c.Email)] = c
		}
		for j := range inbound.ClientStats {
			email := strings.ToLower(inbound.ClientStats[j].Email)
			if c, ok := cMap[email]; ok {
				inbound.ClientStats[j].UUID = c.ID
				inbound.ClientStats[j].SubId = c.SubID
			}
		}
	}
}

// GetInbounds retrieves all inbounds for a specific user with client stats.
func (s *InboundService) GetInbounds(userId int) ([]*model.Inbound, error) {
	db := database.GetDB()
	var inbounds []*model.Inbound
	err := db.Model(model.Inbound{}).Preload("ClientStats").Where("user_id = ?", userId).Find(&inbounds).Error
	if err != nil && err != gorm.ErrRecordNotFound {
		return nil, err
	}
	s.enrichClientStats(db, inbounds)
	return inbounds, nil
}

// GetAllInbounds retrieves all inbounds with client stats.
func (s *InboundService) GetAllInbounds() ([]*model.Inbound, error) {
	db := database.GetDB()
	var inbounds []*model.Inbound
	err := db.Model(model.Inbound{}).Preload("ClientStats").Find(&inbounds).Error
	if err != nil && err != gorm.ErrRecordNotFound {
		return nil, err
	}
	s.enrichClientStats(db, inbounds)
	return inbounds, nil
}

func (s *InboundService) GetInboundsByTrafficReset(period string) ([]*model.Inbound, error) {
	db := database.GetDB()
	var inbounds []*model.Inbound
	err := db.Model(model.Inbound{}).Where("traffic_reset = ?", period).Find(&inbounds).Error
	if err != nil && err != gorm.ErrRecordNotFound {
		return nil, err
	}
	return inbounds, nil
}

func (s *InboundService) GetClients(inbound *model.Inbound) ([]model.Client, error) {
	settings := map[string][]model.Client{}
	json.Unmarshal([]byte(inbound.Settings), &settings)
	if settings == nil {
		return nil, fmt.Errorf("setting is null")
	}
	clients := settings["clients"]
	if clients == nil {
		return nil, nil
	}
	return clients, nil
}

func (s *InboundService) getAllEmails() ([]string, error) {
	db := database.GetDB()
	var emails []string
	err := db.Raw(`
		SELECT DISTINCT JSON_EXTRACT(client.value, '$.email')
		FROM inbounds,
			JSON_EACH(JSON_EXTRACT(inbounds.settings, '$.clients')) AS client
	`).Scan(&emails).Error
	if err != nil {
		return nil, err
	}
	return emails, nil
}

// getAllEmailSubIDs returns email→subId. An email seen with two different
// non-empty subIds is locked (mapped to "") so neither identity can claim it.
func (s *InboundService) getAllEmailSubIDs() (map[string]string, error) {
	db := database.GetDB()
	var rows []struct {
		Email string
		SubID string
	}
	err := db.Raw(`
		SELECT JSON_EXTRACT(client.value, '$.email') AS email,
			JSON_EXTRACT(client.value, '$.subId') AS sub_id
		FROM inbounds,
			JSON_EACH(JSON_EXTRACT(inbounds.settings, '$.clients')) AS client
	`).Scan(&rows).Error
	if err != nil {
		return nil, err
	}
	result := make(map[string]string, len(rows))
	for _, r := range rows {
		email := strings.ToLower(strings.Trim(r.Email, "\""))
		if email == "" {
			continue
		}
		subID := strings.Trim(r.SubID, "\"")
		if existing, ok := result[email]; ok {
			if existing != subID {
				result[email] = ""
			}
			continue
		}
		result[email] = subID
	}
	return result, nil
}

func lowerAll(in []string) []string {
	out := make([]string, len(in))
	for i, s := range in {
		out[i] = strings.ToLower(s)
	}
	return out
}

// emailUsedByOtherInbounds reports whether email lives in any inbound other
// than exceptInboundId. Empty email returns false.
func (s *InboundService) emailUsedByOtherInbounds(email string, exceptInboundId int) (bool, error) {
	if email == "" {
		return false, nil
	}
	db := database.GetDB()
	var count int64
	err := db.Raw(`
		SELECT COUNT(*)
		FROM inbounds,
			JSON_EACH(JSON_EXTRACT(inbounds.settings, '$.clients')) AS client
		WHERE inbounds.id != ?
			AND LOWER(JSON_EXTRACT(client.value, '$.email')) = LOWER(?)
	`, exceptInboundId, email).Scan(&count).Error
	if err != nil {
		return false, err
	}
	return count > 0, nil
}

// checkEmailsExistForClients validates a batch of incoming clients. An email
// collides only when the existing holder has a different (or empty) subId —
// matching non-empty subIds let multiple inbounds share one identity.
func (s *InboundService) checkEmailsExistForClients(clients []model.Client) (string, error) {
	emailSubIDs, err := s.getAllEmailSubIDs()
	if err != nil {
		return "", err
	}
	seen := make(map[string]string, len(clients))
	for _, client := range clients {
		if client.Email == "" {
			continue
		}
		key := strings.ToLower(client.Email)
		// Within the same payload, the same email must carry the same subId;
		// otherwise we would silently merge two distinct identities.
		if prev, ok := seen[key]; ok {
			if prev != client.SubID || client.SubID == "" {
				return client.Email, nil
			}
			continue
		}
		seen[key] = client.SubID
		if existingSub, ok := emailSubIDs[key]; ok {
			if client.SubID == "" || existingSub == "" || existingSub != client.SubID {
				return client.Email, nil
			}
		}
	}
	return "", nil
}

// AddInbound creates a new inbound configuration.
// It validates port uniqueness, client email uniqueness, and required fields,
// then saves the inbound to the database and optionally adds it to the running Xray instance.
// Returns the created inbound, whether Xray needs restart, and any error.
func (s *InboundService) AddInbound(inbound *model.Inbound) (*model.Inbound, bool, error) {
	exist, err := s.checkPortConflict(inbound, 0)
	if err != nil {
		return inbound, false, err
	}
	if exist {
		return inbound, false, common.NewError("Port already exists:", inbound.Port)
	}

	// pick a tag that won't collide with an existing row. for the common
	// case this is the same "inbound-<port>" string the controller already
	// set; only when this port already has another inbound on a different
	// transport (now possible after the transport-aware port check) does
	// this disambiguate with a -tcp/-udp suffix. see #4103.
	inbound.Tag, err = s.generateInboundTag(inbound, 0)
	if err != nil {
		return inbound, false, err
	}

	clients, err := s.GetClients(inbound)
	if err != nil {
		return inbound, false, err
	}
	existEmail, err := s.checkEmailsExistForClients(clients)
	if err != nil {
		return inbound, false, err
	}
	if existEmail != "" {
		return inbound, false, common.NewError("Duplicate email:", existEmail)
	}

	// Ensure created_at and updated_at on clients in settings
	if len(clients) > 0 {
		var settings map[string]any
		if err2 := json.Unmarshal([]byte(inbound.Settings), &settings); err2 == nil && settings != nil {
			now := time.Now().Unix() * 1000
			updatedClients := make([]model.Client, 0, len(clients))
			for _, c := range clients {
				if c.CreatedAt == 0 {
					c.CreatedAt = now
				}
				c.UpdatedAt = now
				updatedClients = append(updatedClients, c)
			}
			settings["clients"] = updatedClients
			if bs, err3 := json.MarshalIndent(settings, "", " "); err3 == nil {
				inbound.Settings = string(bs)
			} else {
				logger.Debug("Unable to marshal inbound settings with timestamps:", err3)
			}
		} else if err2 != nil {
			logger.Debug("Unable to parse inbound settings for timestamps:", err2)
		}
	}

	// Secure client ID
	for _, client := range clients {
		switch inbound.Protocol {
		case "trojan":
			if client.Password == "" {
				return inbound, false, common.NewError("empty client ID")
			}
		case "shadowsocks":
			if client.Email == "" {
				return inbound, false, common.NewError("empty client ID")
			}
		case "hysteria", "hysteria2":
			if client.Auth == "" {
				return inbound, false, common.NewError("empty client ID")
			}
		default:
			if client.ID == "" {
				return inbound, false, common.NewError("empty client ID")
			}
		}
	}

	db := database.GetDB()
	tx := db.Begin()
	defer func() {
		if err == nil {
			tx.Commit()
		} else {
			tx.Rollback()
		}
	}()

	err = tx.Save(inbound).Error
	if err == nil {
		if len(inbound.ClientStats) == 0 {
			for _, client := range clients {
				s.AddClientStat(tx, inbound.Id, &client)
			}
		}
	} else {
		return inbound, false, err
	}

	needRestart := false
	if inbound.Enable {
		rt, rterr := s.runtimeFor(inbound)
		if rterr != nil {
			// Fail-fast on remote routing errors. Assign to the named
			// `err` so the deferred tx handler rolls back the central
			// DB row that tx.Save just inserted — otherwise we'd leave
			// an orphan that the user sees succeed despite the toast.
			err = rterr
			return inbound, false, err
		}
		if err1 := rt.AddInbound(context.Background(), inbound); err1 == nil {
			logger.Debug("New inbound added on", rt.Name(), ":", inbound.Tag)
		} else {
			logger.Debug("Unable to add inbound on", rt.Name(), ":", err1)
			if inbound.NodeID != nil {
				// Remote add failed — roll back so central + node stay
				// in sync (no row on either side).
				err = err1
				return inbound, false, err
			}
			// Local: keep the existing fall-through-to-restart behaviour.
			needRestart = true
		}
	}
	return inbound, needRestart, err
}

// DelInbound deletes an inbound configuration by ID.
// It removes the inbound from the database and the running Xray instance if active.
// Returns whether Xray needs restart and any error.
func (s *InboundService) DelInbound(id int) (bool, error) {
	db := database.GetDB()
	needRestart := false

	// Load the full inbound (not just the tag) so we know its NodeID and
	// can route the runtime call to the right engine. Skip-on-not-found
	// preserves the old "no-op when DB row doesn't exist" behaviour.
	var ib model.Inbound
	loadErr := db.Model(model.Inbound{}).Where("id = ? and enable = ?", id, true).First(&ib).Error
	if loadErr == nil {
		// Delete is best-effort on the runtime side: the user's intent is
		// to get rid of the inbound, so a missing node row, an offline
		// node, or a remote-side "already gone" should NEVER block the
		// central DB cleanup. Worst case the remote keeps an orphan that
		// the user can clean up manually — far less painful than the row
		// being stuck on central.
		rt, rterr := s.runtimeFor(&ib)
		if rterr != nil {
			logger.Warning("DelInbound: runtime lookup failed, deleting central row anyway:", rterr)
			if ib.NodeID == nil {
				needRestart = true
			}
		} else if err1 := rt.DelInbound(context.Background(), &ib); err1 == nil {
			logger.Debug("Inbound deleted on", rt.Name(), ":", ib.Tag)
		} else {
			logger.Warning("DelInbound on", rt.Name(), "failed, deleting central row anyway:", err1)
			if ib.NodeID == nil {
				needRestart = true
			}
		}
	} else {
		logger.Debug("No enabled inbound found to remove by api, id:", id)
	}

	// Delete client traffics of inbounds
	err := db.Where("inbound_id = ?", id).Delete(xray.ClientTraffic{}).Error
	if err != nil {
		return false, err
	}
	inbound, err := s.GetInbound(id)
	if err != nil {
		return false, err
	}
	clients, err := s.GetClients(inbound)
	if err != nil {
		return false, err
	}

	// Bulk-delete client IPs for every email in this inbound. The previous
	// per-client loop fired one DELETE per row — at 7k+ clients that meant
	// thousands of synchronous SQL roundtrips and a multi-second freeze.
	// Chunked to stay under SQLite's bind-variable limit on huge inbounds.
	if len(clients) > 0 {
		emails := make([]string, 0, len(clients))
		for i := range clients {
			if clients[i].Email != "" {
				emails = append(emails, clients[i].Email)
			}
		}
		for _, batch := range chunkStrings(uniqueNonEmptyStrings(emails), sqliteMaxVars) {
			if err := db.Where("client_email IN ?", batch).
				Delete(model.InboundClientIps{}).Error; err != nil {
				return false, err
			}
		}
	}
	return needRestart, db.Delete(model.Inbound{}, id).Error
}

func (s *InboundService) GetInbound(id int) (*model.Inbound, error) {
	db := database.GetDB()
	inbound := &model.Inbound{}
	err := db.Model(model.Inbound{}).First(inbound, id).Error
	if err != nil {
		return nil, err
	}
	return inbound, nil
}

// SetInboundEnable toggles only the enable flag of an inbound, without
// rewriting the (potentially multi-MB) settings JSON. Used by the UI's
// per-row enable switch — for inbounds with thousands of clients the full
// UpdateInbound path is an order of magnitude too slow for an interactive
// toggle (parses + reserialises every client, runs O(N) traffic diff).
//
// Returns (needRestart, error). needRestart is true when the xray runtime
// could not be re-synced from the cached config and a full restart is
// required to pick up the change.
func (s *InboundService) SetInboundEnable(id int, enable bool) (bool, error) {
	inbound, err := s.GetInbound(id)
	if err != nil {
		return false, err
	}
	if inbound.Enable == enable {
		return false, nil
	}
	db := database.GetDB()
	if err := db.Model(model.Inbound{}).Where("id = ?", id).
		Update("enable", enable).Error; err != nil {
		return false, err
	}
	inbound.Enable = enable

	// Sync xray runtime via the Runtime adapter. For local inbounds we
	// also rebuild the runtime config (drops clients flagged as disabled
	// in ClientTraffic) so the live xray sees the same filtered view it
	// did pre-refactor. Remote runtimes ship the unfiltered inbound —
	// the remote panel does its own filtering before pushing to its xray.
	needRestart := false
	rt, rterr := s.runtimeFor(inbound)
	if rterr != nil {
		if inbound.NodeID != nil {
			return false, rterr
		}
		return true, nil
	}
	if err := rt.DelInbound(context.Background(), inbound); err != nil &&
		!strings.Contains(err.Error(), "not found") {
		logger.Debug("SetInboundEnable: DelInbound on", rt.Name(), "failed:", err)
		needRestart = true
	}
	if !enable {
		return needRestart, nil
	}
	addTarget := inbound
	if inbound.NodeID == nil {
		runtimeInbound, err := s.buildRuntimeInboundForAPI(db, inbound)
		if err != nil {
			logger.Debug("SetInboundEnable: build runtime config failed:", err)
			return true, nil
		}
		addTarget = runtimeInbound
	}
	if err := rt.AddInbound(context.Background(), addTarget); err != nil {
		logger.Debug("SetInboundEnable: AddInbound on", rt.Name(), "failed:", err)
		if inbound.NodeID != nil {
			return false, err
		}
		needRestart = true
	}
	return needRestart, nil
}

// UpdateInbound modifies an existing inbound configuration.
// It validates changes, updates the database, and syncs with the running Xray instance.
// Returns the updated inbound, whether Xray needs restart, and any error.
func (s *InboundService) UpdateInbound(inbound *model.Inbound) (*model.Inbound, bool, error) {
	exist, err := s.checkPortConflict(inbound, inbound.Id)
	if err != nil {
		return inbound, false, err
	}
	if exist {
		return inbound, false, common.NewError("Port already exists:", inbound.Port)
	}
	oldInbound, err := s.GetInbound(inbound.Id)
	if err != nil {
		return inbound, false, err
	}
	tag := oldInbound.Tag

	db := database.GetDB()
	tx := db.Begin()
	defer func() {
		if err != nil {
			tx.Rollback()
		} else {
			tx.Commit()
		}
	}()

	err = s.updateClientTraffics(tx, oldInbound, inbound)
	if err != nil {
		return inbound, false, err
	}

	// Ensure created_at and updated_at exist in inbound.Settings clients
	{
		var oldSettings map[string]any
		_ = json.Unmarshal([]byte(oldInbound.Settings), &oldSettings)
		emailToCreated := map[string]int64{}
		emailToUpdated := map[string]int64{}
		if oldSettings != nil {
			if oc, ok := oldSettings["clients"].([]any); ok {
				for _, it := range oc {
					if m, ok2 := it.(map[string]any); ok2 {
						if email, ok3 := m["email"].(string); ok3 {
							switch v := m["created_at"].(type) {
							case float64:
								emailToCreated[email] = int64(v)
							case int64:
								emailToCreated[email] = v
							}
							switch v := m["updated_at"].(type) {
							case float64:
								emailToUpdated[email] = int64(v)
							case int64:
								emailToUpdated[email] = v
							}
						}
					}
				}
			}
		}
		var newSettings map[string]any
		if err2 := json.Unmarshal([]byte(inbound.Settings), &newSettings); err2 == nil && newSettings != nil {
			now := time.Now().Unix() * 1000
			if nSlice, ok := newSettings["clients"].([]any); ok {
				for i := range nSlice {
					if m, ok2 := nSlice[i].(map[string]any); ok2 {
						email, _ := m["email"].(string)
						if _, ok3 := m["created_at"]; !ok3 {
							if v, ok4 := emailToCreated[email]; ok4 && v > 0 {
								m["created_at"] = v
							} else {
								m["created_at"] = now
							}
						}
						// Preserve client's updated_at if present; do not bump on parent inbound update
						if _, hasUpdated := m["updated_at"]; !hasUpdated {
							if v, ok4 := emailToUpdated[email]; ok4 && v > 0 {
								m["updated_at"] = v
							}
						}
						nSlice[i] = m
					}
				}
				newSettings["clients"] = nSlice
				if bs, err3 := json.MarshalIndent(newSettings, "", " "); err3 == nil {
					inbound.Settings = string(bs)
				}
			}
		}
	}

	oldInbound.Up = inbound.Up
	oldInbound.Down = inbound.Down
	oldInbound.Total = inbound.Total
	oldInbound.Remark = inbound.Remark
	oldInbound.Enable = inbound.Enable
	oldInbound.ExpiryTime = inbound.ExpiryTime
	oldInbound.TrafficReset = inbound.TrafficReset
	oldInbound.Listen = inbound.Listen
	oldInbound.Port = inbound.Port
	oldInbound.Protocol = inbound.Protocol
	oldInbound.Settings = inbound.Settings
	oldInbound.StreamSettings = inbound.StreamSettings
	oldInbound.Sniffing = inbound.Sniffing

	// regenerate tag with collision-aware logic. for this row we pass
	// inbound.Id as ignoreId so it doesn't see its own old tag in the db.
	oldInbound.Tag, err = s.generateInboundTag(inbound, inbound.Id)
	if err != nil {
		return inbound, false, err
	}

	needRestart := false
	rt, rterr := s.runtimeFor(oldInbound)
	if rterr != nil {
		if oldInbound.NodeID != nil {
			err = rterr
			return inbound, false, err
		}
		needRestart = true
	} else {
		// Use a snapshot of the OLD tag so the remote can resolve its
		// remote-id even when the new tag has changed (port/listen edit).
		oldSnapshot := *oldInbound
		oldSnapshot.Tag = tag
		if oldInbound.NodeID == nil {
			// Local: keep the old del-then-add-filtered behaviour to
			// preserve runtime client filtering.
			if err2 := rt.DelInbound(context.Background(), &oldSnapshot); err2 == nil {
				logger.Debug("Old inbound deleted on", rt.Name(), ":", tag)
			}
			if inbound.Enable {
				runtimeInbound, err2 := s.buildRuntimeInboundForAPI(tx, oldInbound)
				if err2 != nil {
					logger.Debug("Unable to prepare runtime inbound config:", err2)
					needRestart = true
				} else if err2 := rt.AddInbound(context.Background(), runtimeInbound); err2 == nil {
					logger.Debug("Updated inbound added on", rt.Name(), ":", oldInbound.Tag)
				} else {
					logger.Debug("Unable to update inbound on", rt.Name(), ":", err2)
					needRestart = true
				}
			}
		} else {
			// Remote: a single UpdateInbound call (the Remote adapter
			// resolves remote-id by old tag, then POSTs /update/{id}).
			// Assign to the outer `err` on failure so the deferred tx
			// handler rolls back the central DB write.
			if !inbound.Enable {
				if err2 := rt.DelInbound(context.Background(), &oldSnapshot); err2 != nil {
					err = err2
					return inbound, false, err
				}
			} else if err2 := rt.UpdateInbound(context.Background(), &oldSnapshot, oldInbound); err2 != nil {
				err = err2
				return inbound, false, err
			}
		}
	}
	return inbound, needRestart, tx.Save(oldInbound).Error
}

func (s *InboundService) buildRuntimeInboundForAPI(tx *gorm.DB, inbound *model.Inbound) (*model.Inbound, error) {
	if inbound == nil {
		return nil, fmt.Errorf("inbound is nil")
	}
	runtimeInbound := *inbound
	settings := map[string]any{}
	if err := json.Unmarshal([]byte(inbound.Settings), &settings); err != nil {
		return nil, err
	}
	clients, ok := settings["clients"].([]any)
	if !ok {
		return &runtimeInbound, nil
	}
	var clientStats []xray.ClientTraffic
	err := tx.Model(xray.ClientTraffic{}).
		Where("inbound_id = ?", inbound.Id).
		Select("email", "enable").
		Find(&clientStats).Error
	if err != nil {
		return nil, err
	}
	enableMap := make(map[string]bool, len(clientStats))
	for _, clientTraffic := range clientStats {
		enableMap[clientTraffic.Email] = clientTraffic.Enable
	}
	finalClients := make([]any, 0, len(clients))
	for _, client := range clients {
		c, ok := client.(map[string]any)
		if !ok {
			continue
		}
		email, _ := c["email"].(string)
		if enable, exists := enableMap[email]; exists && !enable {
			continue
		}
		if manualEnable, ok := c["enable"].(bool); ok && !manualEnable {
			continue
		}
		finalClients = append(finalClients, c)
	}
	settings["clients"] = finalClients
	modifiedSettings, err := json.MarshalIndent(settings, "", " ")
	if err != nil {
		return nil, err
	}
	runtimeInbound.Settings = string(modifiedSettings)
	return &runtimeInbound, nil
}

// updateClientTraffics syncs the ClientTraffic rows with the inbound's clients
// list: removes rows for emails that disappeared, inserts rows for newly-added
// emails. Uses sets for O(N) lookup — the previous nested-loop implementation
// was O(N²) and degraded into multi-second pauses on inbounds with thousands
// of clients (toggling, saving, or deleting any such inbound felt frozen).
func (s *InboundService) updateClientTraffics(tx *gorm.DB, oldInbound *model.Inbound, newInbound *model.Inbound) error {
	oldClients, err := s.GetClients(oldInbound)
	if err != nil {
		return err
	}
	newClients, err := s.GetClients(newInbound)
	if err != nil {
		return err
	}

	// Email is the unique key for ClientTraffic rows. Clients without an
	// email have no stats row to sync — skip them on both sides instead of
	// risking a unique-constraint hit or accidental delete of an unrelated row.
	oldEmails := make(map[string]struct{}, len(oldClients))
	for i := range oldClients {
		if oldClients[i].Email == "" {
			continue
		}
		oldEmails[oldClients[i].Email] = struct{}{}
	}
	newEmails := make(map[string]struct{}, len(newClients))
	for i := range newClients {
		if newClients[i].Email == "" {
			continue
		}
		newEmails[newClients[i].Email] = struct{}{}
	}

	// Drop stats rows for removed emails — but not when a sibling inbound
	// still references the email, since the row is the shared accumulator.
	for i := range oldClients {
		email := oldClients[i].Email
		if email == "" {
			continue
		}
		if _, kept := newEmails[email]; kept {
			continue
		}
		stillUsed, err := s.emailUsedByOtherInbounds(email, oldInbound.Id)
		if err != nil {
			return err
		}
		if stillUsed {
			continue
		}
		if err := s.DelClientStat(tx, email); err != nil {
			return err
		}
	}

	// Added clients — create their stats rows.
	for i := range newClients {
		email := newClients[i].Email
		if email == "" {
			continue
		}
		if _, existed := oldEmails[email]; existed {
			continue
		}
		if err := s.AddClientStat(tx, oldInbound.Id, &newClients[i]); err != nil {
			return err
		}
	}
	return nil
}
func (s *InboundService) AddInboundClient(data *model.Inbound) (bool, error) {
clients, err := s.GetClients(data)
if err != nil {
return false, err
}
var settings map[string]any
err = json.Unmarshal([]byte(data.Settings), &settings)
if err != nil {
return false, err
}
interfaceClients := settings["clients"].([]any)
// Add timestamps for new clients being appended
nowTs := time.Now().Unix() * 1000
for i := range interfaceClients {
if cm, ok := interfaceClients[i].(map[string]any); ok {
if _, ok2 := cm["created_at"]; !ok2 {
cm["created_at"] = nowTs
}
cm["updated_at"] = nowTs
interfaceClients[i] = cm
}
}
existEmail, err := s.checkEmailsExistForClients(clients)
if err != nil {
return false, err
}
if existEmail != "" {
return false, common.NewError("Duplicate email:", existEmail)
}
oldInbound, err := s.GetInbound(data.Id)
if err != nil {
return false, err
}
// Secure client ID
for _, client := range clients {
switch oldInbound.Protocol {
case "trojan":
if client.Password == "" {
return false, common.NewError("empty client ID")
}
case "shadowsocks":
if client.Email == "" {
return false, common.NewError("empty client ID")
}
case "hysteria", "hysteria2":
if client.Auth == "" {
return false, common.NewError("empty client ID")
}
default:
if client.ID == "" {
return false, common.NewError("empty client ID")
}
}
}
var oldSettings map[string]any
err = json.Unmarshal([]byte(oldInbound.Settings), &oldSettings)
if err != nil {
return false, err
}
oldClients := oldSettings["clients"].([]any)
oldClients = append(oldClients, interfaceClients...)
oldSettings["clients"] = oldClients
newSettings, err := json.MarshalIndent(oldSettings, "", " ")
if err != nil {
return false, err
}
oldInbound.Settings = string(newSettings)
db := database.GetDB()
tx := db.Begin()
defer func() {
if err != nil {
tx.Rollback()
} else {
tx.Commit()
}
}()
needRestart := false
rt, rterr := s.runtimeFor(oldInbound)
if rterr != nil {
if oldInbound.NodeID != nil {
err = rterr
return false, err
}
needRestart = true
} else if oldInbound.NodeID == nil {
// Local: per-client AddUser keeps existing connections alive
// (incremental hot-add). Walk every new client; on any failure
// fall back to needRestart so cron rebuilds from scratch.
for _, client := range clients {
if len(client.Email) == 0 {
needRestart = true
continue
}
if err = s.AddClientStat(tx, data.Id, &client); err != nil {
return false, err
}
if !client.Enable {
continue
}
cipher := ""
if oldInbound.Protocol == "shadowsocks" {
cipher = oldSettings["method"].(string)
}
err1 := rt.AddUser(context.Background(), oldInbound, map[string]any{
"email": client.Email,
"id": client.ID,
"auth": client.Auth,
"security": client.Security,
"flow": client.Flow,
"password": client.Password,
"cipher": cipher,
})
if err1 == nil {
logger.Debug("Client added on", rt.Name(), ":", client.Email)
} else {
logger.Debug("Error in adding client on", rt.Name(), ":", err1)
needRestart = true
}
}
} else {
// Remote: a single UpdateInbound ships the new clients in one
// HTTP round-trip rather than N. Settings are already mutated
// in-memory (oldInbound.Settings) so the remote sees the final
// state. Per-client ClientStat rows still need the central DB
// update so the loop runs that branch first.
for _, client := range clients {
if len(client.Email) > 0 {
if err = s.AddClientStat(tx, data.Id, &client); err != nil {
return false, err
}
}
}
if err1 := rt.UpdateInbound(context.Background(), oldInbound, oldInbound); err1 != nil {
err = err1
return false, err
}
}
return needRestart, tx.Save(oldInbound).Error
}
func (s *InboundService) getClientPrimaryKey(protocol model.Protocol, client model.Client) string {
switch protocol {
case model.Trojan:
return client.Password
case model.Shadowsocks:
return client.Email
case model.Hysteria:
return client.Auth
default:
return client.ID
}
}
func (s *InboundService) writeBackClientSubID(sourceInboundID int, sourceProtocol model.Protocol, client model.Client, subID string) (bool, error) {
client.SubID = subID
client.UpdatedAt = time.Now().UnixMilli()
clientID := s.getClientPrimaryKey(sourceProtocol, client)
if clientID == "" {
return false, common.NewError("empty client ID")
}
settingsBytes, err := json.Marshal(map[string][]model.Client{
"clients": {client},
})
if err != nil {
return false, err
}
updatePayload := &model.Inbound{
Id: sourceInboundID,
Settings: string(settingsBytes),
}
return s.UpdateInboundClient(updatePayload, clientID)
}
func (s *InboundService) generateRandomCredential(targetProtocol model.Protocol) string {
switch targetProtocol {
case model.VMESS, model.VLESS:
return uuid.NewString()
default:
return strings.ReplaceAll(uuid.NewString(), "-", "")
}
}
func (s *InboundService) buildTargetClientFromSource(source model.Client, targetProtocol model.Protocol, email string, flow string) (model.Client, error) {
nowTs := time.Now().UnixMilli()
target := source
target.Email = email
target.CreatedAt = nowTs
target.UpdatedAt = nowTs
target.ID = ""
target.Password = ""
target.Auth = ""
target.Flow = ""
switch targetProtocol {
case model.VMESS:
target.ID = s.generateRandomCredential(targetProtocol)
case model.VLESS:
target.ID = s.generateRandomCredential(targetProtocol)
if flow == "xtls-rprx-vision" || flow == "xtls-rprx-vision-udp443" {
target.Flow = flow
}
case model.Trojan, model.Shadowsocks:
target.Password = s.generateRandomCredential(targetProtocol)
case model.Hysteria:
target.Auth = s.generateRandomCredential(targetProtocol)
default:
target.ID = s.generateRandomCredential(targetProtocol)
}
return target, nil
}
func (s *InboundService) nextAvailableCopiedEmail(originalEmail string, targetID int, occupied map[string]struct{}) string {
base := fmt.Sprintf("%s_%d", originalEmail, targetID)
candidate := base
suffix := 0
for {
if _, exists := occupied[strings.ToLower(candidate)]; !exists {
occupied[strings.ToLower(candidate)] = struct{}{}
return candidate
}
suffix++
candidate = fmt.Sprintf("%s_%d", base, suffix)
}
}
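nextAvailableCopiedEmail claims each candidate as it hands it out, so repeated copies in one batch keep incrementing the suffix. A standalone sketch (hypothetical name nextFree, logic mirrored from the function above):

```go
package main

import (
	"fmt"
	"strings"
)

// nextFree mirrors nextAvailableCopiedEmail: try email_targetID, then
// email_targetID_1, _2, ... until an unoccupied (case-insensitive) slot
// is found, then claim it so later calls in the same batch stay unique.
func nextFree(original string, targetID int, occupied map[string]struct{}) string {
	base := fmt.Sprintf("%s_%d", original, targetID)
	candidate := base
	for suffix := 0; ; suffix++ {
		if suffix > 0 {
			candidate = fmt.Sprintf("%s_%d", base, suffix)
		}
		if _, exists := occupied[strings.ToLower(candidate)]; !exists {
			occupied[strings.ToLower(candidate)] = struct{}{}
			return candidate
		}
	}
}

func main() {
	occupied := map[string]struct{}{"alice_7": {}}
	fmt.Println(nextFree("alice", 7, occupied)) // alice_7 taken -> alice_7_1
	fmt.Println(nextFree("alice", 7, occupied)) // alice_7_1 now taken -> alice_7_2
	fmt.Println(nextFree("bob", 7, occupied))   // bob_7 is free
}
```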
func (s *InboundService) CopyInboundClients(targetInboundID int, sourceInboundID int, clientEmails []string, flow string) (*CopyClientsResult, bool, error) {
result := &CopyClientsResult{
Added: []string{},
Skipped: []string{},
Errors: []string{},
}
if targetInboundID == sourceInboundID {
return result, false, common.NewError("source and target inbounds must be different")
}
targetInbound, err := s.GetInbound(targetInboundID)
if err != nil {
return result, false, err
}
sourceInbound, err := s.GetInbound(sourceInboundID)
if err != nil {
return result, false, err
}
sourceClients, err := s.GetClients(sourceInbound)
if err != nil {
return result, false, err
}
if len(sourceClients) == 0 {
return result, false, nil
}
allowedEmails := map[string]struct{}{}
if len(clientEmails) > 0 {
for _, email := range clientEmails {
allowedEmails[strings.ToLower(strings.TrimSpace(email))] = struct{}{}
}
}
occupiedEmails := map[string]struct{}{}
allEmails, err := s.getAllEmails()
if err != nil {
return result, false, err
}
for _, email := range allEmails {
clean := strings.Trim(email, "\"")
if clean != "" {
occupiedEmails[strings.ToLower(clean)] = struct{}{}
}
}
newClients := make([]model.Client, 0)
needRestart := false
for _, sourceClient := range sourceClients {
originalEmail := strings.TrimSpace(sourceClient.Email)
if originalEmail == "" {
continue
}
if len(allowedEmails) > 0 {
if _, ok := allowedEmails[strings.ToLower(originalEmail)]; !ok {
continue
}
}
if sourceClient.SubID == "" {
newSubID := uuid.NewString()
subNeedRestart, subErr := s.writeBackClientSubID(sourceInbound.Id, sourceInbound.Protocol, sourceClient, newSubID)
if subErr != nil {
result.Errors = append(result.Errors, fmt.Sprintf("%s: failed to write source subId: %v", originalEmail, subErr))
continue
}
if subNeedRestart {
needRestart = true
}
sourceClient.SubID = newSubID
}
targetEmail := s.nextAvailableCopiedEmail(originalEmail, targetInboundID, occupiedEmails)
targetClient, buildErr := s.buildTargetClientFromSource(sourceClient, targetInbound.Protocol, targetEmail, flow)
if buildErr != nil {
result.Errors = append(result.Errors, fmt.Sprintf("%s: %v", originalEmail, buildErr))
continue
}
newClients = append(newClients, targetClient)
result.Added = append(result.Added, targetEmail)
}
if len(newClients) == 0 {
return result, needRestart, nil
}
settingsPayload, err := json.Marshal(map[string][]model.Client{
"clients": newClients,
})
if err != nil {
return result, needRestart, err
}
addNeedRestart, err := s.AddInboundClient(&model.Inbound{
Id: targetInboundID,
Settings: string(settingsPayload),
})
if err != nil {
return result, needRestart, err
}
if addNeedRestart {
needRestart = true
}
return result, needRestart, nil
}
func (s *InboundService) DelInboundClient(inboundId int, clientId string) (bool, error) {
oldInbound, err := s.GetInbound(inboundId)
if err != nil {
logger.Error("Load Old Data Error")
return false, err
}
var settings map[string]any
err = json.Unmarshal([]byte(oldInbound.Settings), &settings)
if err != nil {
return false, err
}
email := ""
client_key := "id"
switch oldInbound.Protocol {
case "trojan":
client_key = "password"
case "shadowsocks":
client_key = "email"
case "hysteria", "hysteria2":
client_key = "auth"
}
interfaceClients := settings["clients"].([]any)
var newClients []any
needApiDel := false
clientFound := false
for _, client := range interfaceClients {
c := client.(map[string]any)
c_id := c[client_key].(string)
if c_id == clientId {
clientFound = true
email, _ = c["email"].(string)
needApiDel, _ = c["enable"].(bool)
} else {
newClients = append(newClients, client)
}
}
if !clientFound {
return false, common.NewError("Client Not Found In Inbound For ID:", clientId)
}
if len(newClients) == 0 {
return false, common.NewError("no client remained in Inbound")
}
settings["clients"] = newClients
newSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return false, err
}
oldInbound.Settings = string(newSettings)
db := database.GetDB()
// Keep the client_traffics row and IPs alive when another inbound still
// references this email — siblings depend on the shared accounting state.
emailShared, err := s.emailUsedByOtherInbounds(email, inboundId)
if err != nil {
return false, err
}
if !emailShared {
err = s.DelClientIPs(db, email)
if err != nil {
logger.Error("Error in delete client IPs")
return false, err
}
}
needRestart := false
if len(email) > 0 {
notDepleted := true
err = db.Model(xray.ClientTraffic{}).Select("enable").Where("email = ?", email).First(&notDepleted).Error
if err != nil {
logger.Error("Get stats error")
return false, err
}
if !emailShared {
err = s.DelClientStat(db, email)
if err != nil {
logger.Error("Delete stats Data Error")
return false, err
}
}
if needApiDel && notDepleted {
rt, rterr := s.runtimeFor(oldInbound)
if rterr != nil {
if oldInbound.NodeID != nil {
return false, rterr
}
needRestart = true
} else if oldInbound.NodeID == nil {
err1 := rt.RemoveUser(context.Background(), oldInbound, email)
if err1 == nil {
logger.Debug("Client deleted on", rt.Name(), ":", email)
} else if strings.Contains(err1.Error(), fmt.Sprintf("User %s not found.", email)) {
logger.Debug("User already deleted; nothing more to do")
} else {
logger.Debug("Error in deleting client on", rt.Name(), ":", err1)
needRestart = true
}
} else {
// Remote: settings already mutated above; one UpdateInbound
// ships the post-deletion state to the node.
if err1 := rt.UpdateInbound(context.Background(), oldInbound, oldInbound); err1 != nil {
return false, err1
}
}
}
}
return needRestart, db.Save(oldInbound).Error
}
func (s *InboundService) UpdateInboundClient(data *model.Inbound, clientId string) (bool, error) {
// TODO: check if TrafficReset field is updating
clients, err := s.GetClients(data)
if err != nil {
return false, err
}
var settings map[string]any
err = json.Unmarshal([]byte(data.Settings), &settings)
if err != nil {
return false, err
}
interfaceClients := settings["clients"].([]any)
oldInbound, err := s.GetInbound(data.Id)
if err != nil {
return false, err
}
oldClients, err := s.GetClients(oldInbound)
if err != nil {
return false, err
}
oldEmail := ""
newClientId := ""
clientIndex := -1
for index, oldClient := range oldClients {
oldClientId := ""
switch oldInbound.Protocol {
case "trojan":
oldClientId = oldClient.Password
newClientId = clients[0].Password
case "shadowsocks":
oldClientId = oldClient.Email
newClientId = clients[0].Email
case "hysteria", "hysteria2":
oldClientId = oldClient.Auth
newClientId = clients[0].Auth
default:
oldClientId = oldClient.ID
newClientId = clients[0].ID
}
if clientId == oldClientId {
oldEmail = oldClient.Email
clientIndex = index
break
}
}
// Validate that the target client was found and carries a credential.
if clientIndex == -1 {
return false, common.NewError("client to update not found in inbound")
}
if newClientId == "" {
return false, common.NewError("empty client ID")
}
if len(clients[0].Email) > 0 && clients[0].Email != oldEmail {
existEmail, err := s.checkEmailsExistForClients(clients)
if err != nil {
return false, err
}
if existEmail != "" {
return false, common.NewError("Duplicate email:", existEmail)
}
}
var oldSettings map[string]any
err = json.Unmarshal([]byte(oldInbound.Settings), &oldSettings)
if err != nil {
return false, err
}
settingsClients := oldSettings["clients"].([]any)
// Preserve created_at and set updated_at for the replacing client
var preservedCreated any
if clientIndex >= 0 && clientIndex < len(settingsClients) {
if oldMap, ok := settingsClients[clientIndex].(map[string]any); ok {
if v, ok2 := oldMap["created_at"]; ok2 {
preservedCreated = v
}
}
}
if len(interfaceClients) > 0 {
if newMap, ok := interfaceClients[0].(map[string]any); ok {
if preservedCreated == nil {
preservedCreated = time.Now().Unix() * 1000
}
newMap["created_at"] = preservedCreated
newMap["updated_at"] = time.Now().Unix() * 1000
interfaceClients[0] = newMap
}
}
settingsClients[clientIndex] = interfaceClients[0]
oldSettings["clients"] = settingsClients
// testseed is only meaningful when at least one VLESS client uses the exact
// xtls-rprx-vision flow. The client-edit path only rewrites a single client,
// so re-check the flow set here and strip a stale testseed when nothing in the
// inbound still warrants it. The full-inbound update path already handles this
// on the JS side via VLESSSettings.toJson().
if oldInbound.Protocol == model.VLESS {
hasVisionFlow := false
for _, c := range settingsClients {
cm, ok := c.(map[string]any)
if !ok {
continue
}
if flow, _ := cm["flow"].(string); flow == "xtls-rprx-vision" {
hasVisionFlow = true
break
}
}
if !hasVisionFlow {
delete(oldSettings, "testseed")
}
}
newSettings, err := json.MarshalIndent(oldSettings, "", " ")
if err != nil {
return false, err
}
oldInbound.Settings = string(newSettings)
db := database.GetDB()
tx := db.Begin()
defer func() {
if err != nil {
tx.Rollback()
} else {
tx.Commit()
}
}()
if len(clients[0].Email) > 0 {
if len(oldEmail) > 0 {
// Repointing onto an email that already has a row would collide on
// the unique constraint, so retire the donor and let the surviving
// row carry the merged identity.
emailUnchanged := strings.EqualFold(oldEmail, clients[0].Email)
targetExists := int64(0)
if !emailUnchanged {
if err = tx.Model(xray.ClientTraffic{}).Where("email = ?", clients[0].Email).Count(&targetExists).Error; err != nil {
return false, err
}
}
if emailUnchanged || targetExists == 0 {
err = s.UpdateClientStat(tx, oldEmail, &clients[0])
if err != nil {
return false, err
}
err = s.UpdateClientIPs(tx, oldEmail, clients[0].Email)
if err != nil {
return false, err
}
} else {
stillUsed, sErr := s.emailUsedByOtherInbounds(oldEmail, data.Id)
if sErr != nil {
return false, sErr
}
if !stillUsed {
if err = s.DelClientStat(tx, oldEmail); err != nil {
return false, err
}
if err = s.DelClientIPs(tx, oldEmail); err != nil {
return false, err
}
}
// Refresh the surviving row with the new client's limits/expiry.
if err = s.UpdateClientStat(tx, clients[0].Email, &clients[0]); err != nil {
return false, err
}
}
} else {
if err = s.AddClientStat(tx, data.Id, &clients[0]); err != nil {
return false, err
}
}
} else {
stillUsed, err := s.emailUsedByOtherInbounds(oldEmail, data.Id)
if err != nil {
return false, err
}
if !stillUsed {
err = s.DelClientStat(tx, oldEmail)
if err != nil {
return false, err
}
err = s.DelClientIPs(tx, oldEmail)
if err != nil {
return false, err
}
}
}
needRestart := false
if len(oldEmail) > 0 {
rt, rterr := s.runtimeFor(oldInbound)
if rterr != nil {
if oldInbound.NodeID != nil {
err = rterr
return false, err
}
needRestart = true
} else if oldInbound.NodeID == nil {
// Local: paired Remove+Add on the live xray, keeping other
// clients online (full-restart fallback on partial failure).
if oldClients[clientIndex].Enable {
err1 := rt.RemoveUser(context.Background(), oldInbound, oldEmail)
if err1 == nil {
logger.Debug("Old client deleted on", rt.Name(), ":", oldEmail)
} else if strings.Contains(err1.Error(), fmt.Sprintf("User %s not found.", oldEmail)) {
logger.Debug("User is already deleted. Nothing to do more...")
} else {
logger.Debug("Error in deleting client on", rt.Name(), ":", err1)
needRestart = true
}
}
if clients[0].Enable {
cipher := ""
if oldInbound.Protocol == "shadowsocks" {
cipher = oldSettings["method"].(string)
}
err1 := rt.AddUser(context.Background(), oldInbound, map[string]any{
"email": clients[0].Email,
"id": clients[0].ID,
"security": clients[0].Security,
"flow": clients[0].Flow,
"auth": clients[0].Auth,
"password": clients[0].Password,
"cipher": cipher,
})
if err1 == nil {
logger.Debug("Client edited on", rt.Name(), ":", clients[0].Email)
} else {
logger.Debug("Error in adding client on", rt.Name(), ":", err1)
needRestart = true
}
}
} else {
// Remote: settings already mutated; one UpdateInbound suffices.
if err1 := rt.UpdateInbound(context.Background(), oldInbound, oldInbound); err1 != nil {
err = err1
return false, err
}
}
} else {
logger.Debug("Client old email not found")
needRestart = true
}
return needRestart, tx.Save(oldInbound).Error
}
// resetGracePeriodMs is the window after a reset during which incoming
// traffic snapshots from the node are ignored if they would resurrect
// non-zero counters. Three sync ticks (10s each) is enough headroom for
// the central → node reset HTTP call to land before the next pull.
const resetGracePeriodMs int64 = 30000
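The grace-window rule that SetRemoteTraffic applies per inbound reduces to a small pure predicate; this sketch (hypothetical skipSnapshot helper, same constant) walks the three interesting cases:

```go
package main

import "fmt"

const resetGracePeriodMs int64 = 30_000

// skipSnapshot reports whether a node snapshot for an inbound should be
// ignored: we are still inside the post-reset grace window AND applying
// the snapshot would push the all-time counter back up.
func skipSnapshot(nowMs, lastResetMs, snapAllTime, centralAllTime int64) bool {
	inGrace := lastResetMs > 0 && nowMs-lastResetMs < resetGracePeriodMs
	return inGrace && snapAllTime > centralAllTime
}

func main() {
	now := int64(1_000_000)
	// Reset 10s ago, node still reports old traffic: defer the merge.
	fmt.Println(skipSnapshot(now, now-10_000, 5_000, 0)) // true
	// Reset 40s ago (grace expired): the node snapshot wins.
	fmt.Println(skipSnapshot(now, now-40_000, 5_000, 0)) // false
	// Inside grace but the node was reset too: safe to apply.
	fmt.Println(skipSnapshot(now, now-10_000, 0, 0)) // false
}
```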
// SetRemoteTraffic merges absolute counters from a remote node into the
// central DB. Unlike AddTraffic, which adds deltas pulled from the local
// xray gRPC stats endpoint, this SETs the values — the node already has
// the canonical absolute value and we just mirror it.
//
// Rows in the post-reset grace window are skipped if the snapshot would
// regress them, so a user-initiated reset survives until the propagation
// HTTP call has completed on the node. After the grace window expires
// the snapshot wins regardless (the node is authoritative for the
// inbounds it hosts).
func (s *InboundService) SetRemoteTraffic(nodeID int, snap *runtime.TrafficSnapshot) error {
if snap == nil || nodeID <= 0 {
return nil
}
db := database.GetDB()
now := time.Now().UnixMilli()
// Load central inbounds for this node so we can resolve tag→id and
// honour the per-inbound grace window. One query covers every row
// touched in this tick.
var central []model.Inbound
if err := db.Model(model.Inbound{}).
Where("node_id = ?", nodeID).
Find(&central).Error; err != nil {
return err
}
tagToCentral := make(map[string]*model.Inbound, len(central))
for i := range central {
tagToCentral[central[i].Tag] = &central[i]
}
tx := db.Begin()
committed := false
defer func() {
if !committed {
tx.Rollback()
}
}()
// Per-inbound counter merge. Skip rows whose central allTime is
// suspiciously lower than the snapshot AND we're inside the grace
// window — that's the "reset hit central but not the node yet"
// pattern we want to defer until next tick.
for _, snapIb := range snap.Inbounds {
if snapIb == nil {
continue
}
c, ok := tagToCentral[snapIb.Tag]
if !ok {
continue // node has an inbound the central doesn't know about — ignore
}
snapAllTime := snapIb.AllTime
if snapAllTime == 0 {
snapAllTime = snapIb.Up + snapIb.Down
}
inGrace := c.LastTrafficResetTime > 0 && now-c.LastTrafficResetTime < resetGracePeriodMs
if inGrace && snapAllTime > c.AllTime {
logger.Debug("SetRemoteTraffic: skipping inbound", c.Id, "in reset grace window")
continue
}
if err := tx.Model(model.Inbound{}).
Where("id = ?", c.Id).
Updates(map[string]any{
"up": snapIb.Up,
"down": snapIb.Down,
"all_time": snapAllTime,
}).Error; err != nil {
return err
}
}
// Per-client merge. The snapshot's ClientStats are nested under
// each Inbound, so flatten before walking. Each client_traffics row
// is keyed by (inbound_id, email) — we resolve inbound_id from the
// central inbound row matched above.
for _, snapIb := range snap.Inbounds {
if snapIb == nil {
continue
}
c, ok := tagToCentral[snapIb.Tag]
if !ok {
continue
}
// Honour the same grace window for client rows: if the parent
// inbound was just reset, leave its clients alone too.
inGrace := c.LastTrafficResetTime > 0 && now-c.LastTrafficResetTime < resetGracePeriodMs
for _, cs := range snapIb.ClientStats {
snapAllTime := cs.AllTime
if snapAllTime == 0 {
snapAllTime = cs.Up + cs.Down
}
if inGrace {
// Skip client rows whose snapshot would push counters
// back up; allow rows that are zero on the node side
// (those are normal — node was reset alongside central).
if snapAllTime > 0 {
continue
}
}
// MAX(last_online, ?) so a momentary clock skew on the node
// can't regress the central row's last-seen timestamp.
if err := tx.Exec(
`UPDATE client_traffics
SET up = ?, down = ?, all_time = ?, last_online = MAX(last_online, ?)
WHERE inbound_id = ? AND email = ?`,
cs.Up, cs.Down, snapAllTime, cs.LastOnline, c.Id, cs.Email,
).Error; err != nil {
return err
}
}
}
if err := tx.Commit().Error; err != nil {
return err
}
committed = true
// Push the node's online-clients contribution into xray.Process so
// GetOnlineClients returns the union of local + every node. Empty
// list still calls Set so a node that just had everyone disconnect
// updates promptly.
if p != nil {
p.SetNodeOnlineClients(nodeID, snap.OnlineEmails)
}
return nil
}
func (s *InboundService) AddTraffic(inboundTraffics []*xray.Traffic, clientTraffics []*xray.ClientTraffic) (bool, bool, error) {
var err error
db := database.GetDB()
tx := db.Begin()
defer func() {
if err != nil {
tx.Rollback()
} else {
tx.Commit()
}
}()
err = s.addInboundTraffic(tx, inboundTraffics)
if err != nil {
return false, false, err
}
err = s.addClientTraffic(tx, clientTraffics)
if err != nil {
return false, false, err
}
needRestart0, count, err := s.autoRenewClients(tx)
if err != nil {
logger.Warning("Error in renew clients:", err)
} else if count > 0 {
logger.Debugf("%v clients renewed", count)
}
disabledClientsCount := int64(0)
needRestart1, count, err := s.disableInvalidClients(tx)
if err != nil {
logger.Warning("Error in disabling invalid clients:", err)
} else if count > 0 {
logger.Debugf("%v clients disabled", count)
disabledClientsCount = count
}
needRestart2, count, err := s.disableInvalidInbounds(tx)
if err != nil {
logger.Warning("Error in disabling invalid inbounds:", err)
} else if count > 0 {
logger.Debugf("%v inbounds disabled", count)
}
return needRestart0 || needRestart1 || needRestart2, disabledClientsCount > 0, nil
}
func (s *InboundService) addInboundTraffic(tx *gorm.DB, traffics []*xray.Traffic) error {
if len(traffics) == 0 {
return nil
}
var err error
for _, traffic := range traffics {
if traffic.IsInbound {
err = tx.Model(&model.Inbound{}).Where("tag = ?", traffic.Tag).
Updates(map[string]any{
"up": gorm.Expr("up + ?", traffic.Up),
"down": gorm.Expr("down + ?", traffic.Down),
"all_time": gorm.Expr("COALESCE(all_time, 0) + ?", traffic.Up+traffic.Down),
}).Error
if err != nil {
return err
}
}
}
return nil
}
func (s *InboundService) addClientTraffic(tx *gorm.DB, traffics []*xray.ClientTraffic) (err error) {
if len(traffics) == 0 {
// No traffic this tick: clear the online-clients set.
if p != nil {
p.SetOnlineClients(make([]string, 0))
}
return nil
}
onlineClients := make([]string, 0)
emails := make([]string, 0, len(traffics))
for _, traffic := range traffics {
emails = append(emails, traffic.Email)
}
dbClientTraffics := make([]*xray.ClientTraffic, 0, len(traffics))
err = tx.Model(xray.ClientTraffic{}).Where("email IN (?)", emails).Find(&dbClientTraffics).Error
if err != nil {
return err
}
// Nothing matched in the DB; gorm's Save would fail on an empty slice.
if len(dbClientTraffics) == 0 {
return nil
}
dbClientTraffics, err = s.adjustTraffics(tx, dbClientTraffics)
if err != nil {
return err
}
// Index by email for O(N) merge — the previous nested loop was O(N²)
// and dominated each cron tick on inbounds with thousands of active
// clients (7500 × 7500 = 56M string comparisons every 10 seconds).
trafficByEmail := make(map[string]*xray.ClientTraffic, len(traffics))
for i := range traffics {
if traffics[i] != nil {
trafficByEmail[traffics[i].Email] = traffics[i]
}
}
now := time.Now().UnixMilli()
for i := range dbClientTraffics {
t, ok := trafficByEmail[dbClientTraffics[i].Email]
if !ok {
continue
}
dbClientTraffics[i].Up += t.Up
dbClientTraffics[i].Down += t.Down
dbClientTraffics[i].AllTime += t.Up + t.Down
if t.Up+t.Down > 0 {
onlineClients = append(onlineClients, t.Email)
dbClientTraffics[i].LastOnline = now
}
}
// Publish the online set (p is nil while the xray process is stopped).
if p != nil {
p.SetOnlineClients(onlineClients)
}
err = tx.Save(dbClientTraffics).Error
if err != nil {
logger.Warning("AddClientTraffic update data ", err)
}
return nil
}
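The map-index merge described in the comment above can be isolated from gorm entirely; a minimal sketch with a hypothetical mergeDeltas helper:

```go
package main

import "fmt"

type ct struct {
	Email    string
	Up, Down int64
}

// mergeDeltas folds per-tick deltas into DB rows by indexing the deltas
// by email first: one O(M) pass to build the map, one O(N) pass to merge,
// replacing an O(N*M) nested scan.
func mergeDeltas(dbRows []*ct, deltas []ct) {
	byEmail := make(map[string]*ct, len(deltas))
	for i := range deltas {
		byEmail[deltas[i].Email] = &deltas[i]
	}
	for _, row := range dbRows {
		if d, ok := byEmail[row.Email]; ok {
			row.Up += d.Up
			row.Down += d.Down
		}
	}
}

func main() {
	dbRows := []*ct{{Email: "a@x", Up: 100, Down: 200}, {Email: "c@x"}}
	mergeDeltas(dbRows, []ct{{"a@x", 10, 20}, {"b@x", 1, 2}})
	// "b@x" has no DB row and "c@x" had no delta; only "a@x" changes.
	fmt.Println(dbRows[0].Up, dbRows[0].Down, dbRows[1].Up) // 110 220 0
}
```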
func (s *InboundService) adjustTraffics(tx *gorm.DB, dbClientTraffics []*xray.ClientTraffic) ([]*xray.ClientTraffic, error) {
inboundIds := make([]int, 0, len(dbClientTraffics))
for _, dbClientTraffic := range dbClientTraffics {
if dbClientTraffic.ExpiryTime < 0 {
inboundIds = append(inboundIds, dbClientTraffic.InboundId)
}
}
if len(inboundIds) > 0 {
var inbounds []*model.Inbound
err := tx.Model(model.Inbound{}).Where("id IN (?)", inboundIds).Find(&inbounds).Error
if err != nil {
return nil, err
}
for inboundIndex := range inbounds {
settings := map[string]any{}
if err := json.Unmarshal([]byte(inbounds[inboundIndex].Settings), &settings); err != nil {
continue
}
clients, ok := settings["clients"].([]any)
if ok {
var newClients []any
for clientIndex := range clients {
c, ok := clients[clientIndex].(map[string]any)
if !ok {
continue
}
for trafficIndex := range dbClientTraffics {
if dbClientTraffics[trafficIndex].ExpiryTime < 0 && c["email"] == dbClientTraffics[trafficIndex].Email {
// Negative expiryTime encodes "N ms after first use": anchor it to now.
oldExpiryTime, _ := c["expiryTime"].(float64)
newExpiryTime := time.Now().UnixMilli() - int64(oldExpiryTime)
c["expiryTime"] = newExpiryTime
c["updated_at"] = time.Now().UnixMilli()
dbClientTraffics[trafficIndex].ExpiryTime = newExpiryTime
break
}
}
// Backfill created_at and set updated_at.
if _, ok := c["created_at"]; !ok {
c["created_at"] = time.Now().UnixMilli()
}
c["updated_at"] = time.Now().UnixMilli()
newClients = append(newClients, any(c))
}
settings["clients"] = newClients
modifiedSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return nil, err
}
inbounds[inboundIndex].Settings = string(modifiedSettings)
}
}
err = tx.Save(inbounds).Error
if err != nil {
logger.Warning("AddClientTraffic update inbounds ", err)
logger.Error(inbounds)
}
}
return dbClientTraffics, nil
}
func (s *InboundService) autoRenewClients(tx *gorm.DB) (bool, int64, error) {
// Find clients whose auto-renew period has elapsed (reset > 0, expiry passed).
var traffics []*xray.ClientTraffic
now := time.Now().Unix() * 1000
var err, err1 error
err = tx.Model(xray.ClientTraffic{}).Where("reset > 0 and expiry_time > 0 and expiry_time <= ?", now).Find(&traffics).Error
if err != nil {
return false, 0, err
}
// return if there is no client to renew
if len(traffics) == 0 {
return false, 0, nil
}
var inbound_ids []int
var inbounds []*model.Inbound
needRestart := false
var clientsToAdd []struct {
protocol string
tag string
client map[string]any
}
for _, traffic := range traffics {
inbound_ids = append(inbound_ids, traffic.InboundId)
}
// Dedupe so an inbound hosting N expired clients is fetched and saved once
// per tick instead of N times across chunk boundaries.
inbound_ids = uniqueInts(inbound_ids)
// Chunked to stay under SQLite's bind-variable limit when many inbounds
// are touched in a single tick.
for _, batch := range chunkInts(inbound_ids, sqliteMaxVars) {
var page []*model.Inbound
if err = tx.Model(model.Inbound{}).Where("id IN ?", batch).Find(&page).Error; err != nil {
return false, 0, err
}
inbounds = append(inbounds, page...)
}
for inbound_index := range inbounds {
settings := map[string]any{}
if err := json.Unmarshal([]byte(inbounds[inbound_index].Settings), &settings); err != nil {
continue
}
clients, ok := settings["clients"].([]any)
if !ok {
continue
}
for client_index := range clients {
c, ok := clients[client_index].(map[string]any)
if !ok {
continue
}
for traffic_index, traffic := range traffics {
if email, _ := c["email"].(string); traffic.Email == email {
newExpiryTime := traffic.ExpiryTime
for newExpiryTime < now {
newExpiryTime += (int64(traffic.Reset) * 86400000)
}
c["expiryTime"] = newExpiryTime
traffics[traffic_index].ExpiryTime = newExpiryTime
traffics[traffic_index].Down = 0
traffics[traffic_index].Up = 0
if !traffic.Enable {
traffics[traffic_index].Enable = true
clientsToAdd = append(clientsToAdd,
struct {
protocol string
tag string
client map[string]any
}{
protocol: string(inbounds[inbound_index].Protocol),
tag: inbounds[inbound_index].Tag,
client: c,
})
}
clients[client_index] = any(c)
break
}
}
}
settings["clients"] = clients
newSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return false, 0, err
}
inbounds[inbound_index].Settings = string(newSettings)
}
err = tx.Save(inbounds).Error
if err != nil {
return false, 0, err
}
err = tx.Save(traffics).Error
if err != nil {
return false, 0, err
}
if p != nil {
err1 = s.xrayApi.Init(p.GetAPIPort())
if err1 != nil {
return true, int64(len(traffics)), nil
}
for _, clientToAdd := range clientsToAdd {
err1 = s.xrayApi.AddUser(clientToAdd.protocol, clientToAdd.tag, clientToAdd.client)
if err1 != nil {
needRestart = true
}
}
s.xrayApi.Close()
}
return needRestart, int64(len(traffics)), nil
}
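The renewal catch-up loop in autoRenewClients rolls an expired timestamp forward by whole reset periods; a minimal sketch (hypothetical rollForward helper, same arithmetic):

```go
package main

import "fmt"

const dayMs int64 = 86_400_000

// rollForward advances an expired timestamp by as many whole reset
// periods as needed so the new expiry is the first period boundary in
// the future — a long-expired client gets one current period, not a
// stack of back-dated ones.
func rollForward(expiry, now int64, resetDays int) int64 {
	for expiry < now {
		expiry += int64(resetDays) * dayMs
	}
	return expiry
}

func main() {
	now := int64(100) * dayMs
	// Expired 5 days ago with a 30-day reset: one period forward.
	fmt.Println(rollForward(95*dayMs, now, 30) / dayMs) // 125
	// Expired 90 days ago: three periods forward, landing at day 100.
	fmt.Println(rollForward(10*dayMs, now, 30) / dayMs) // 100
}
```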
func (s *InboundService) disableInvalidInbounds(tx *gorm.DB) (bool, int64, error) {
now := time.Now().Unix() * 1000
needRestart := false
if p != nil {
var tags []string
err := tx.Table("inbounds").
Select("inbounds.tag").
Where("((total > 0 and up + down >= total) or (expiry_time > 0 and expiry_time <= ?)) and enable = ?", now, true).
Scan(&tags).Error
if err != nil {
return false, 0, err
}
s.xrayApi.Init(p.GetAPIPort())
for _, tag := range tags {
err1 := s.xrayApi.DelInbound(tag)
if err1 == nil {
logger.Debug("Inbound disabled by api:", tag)
} else {
logger.Debug("Error in disabling inbound by api:", err1)
needRestart = true
}
}
s.xrayApi.Close()
}
result := tx.Model(model.Inbound{}).
Where("((total > 0 and up + down >= total) or (expiry_time > 0 and expiry_time <= ?)) and enable = ?", now, true).
Update("enable", false)
err := result.Error
count := result.RowsAffected
return needRestart, count, err
}
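autoRenewClients above assumes uniqueInts and chunkInts helpers defined elsewhere in the package; the implementations below are plausible sketches of the dedupe-then-chunk pattern, not the project's actual code (the real sqliteMaxVars value and helper signatures may differ):

```go
package main

import "fmt"

// uniqueInts dedupes while preserving first-seen order, so an inbound
// hosting many expired clients is fetched and saved once per tick.
func uniqueInts(in []int) []int {
	seen := make(map[int]struct{}, len(in))
	out := make([]int, 0, len(in))
	for _, v := range in {
		if _, ok := seen[v]; !ok {
			seen[v] = struct{}{}
			out = append(out, v)
		}
	}
	return out
}

// chunkInts splits ids into batches no larger than size, keeping each
// `WHERE id IN ?` under SQLite's bind-variable cap (999 by default).
func chunkInts(in []int, size int) [][]int {
	var out [][]int
	for len(in) > size {
		out = append(out, in[:size])
		in = in[size:]
	}
	if len(in) > 0 {
		out = append(out, in)
	}
	return out
}

func main() {
	ids := uniqueInts([]int{1, 2, 2, 3, 1, 4, 5})
	fmt.Println(ids)               // [1 2 3 4 5]
	fmt.Println(chunkInts(ids, 2)) // [[1 2] [3 4] [5]]
}
```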
func (s *InboundService) disableInvalidClients(tx *gorm.DB) (bool, int64, error) {
now := time.Now().Unix() * 1000
needRestart := false
var depletedRows []xray.ClientTraffic
err := tx.Model(xray.ClientTraffic{}).
Where("((total > 0 AND up + down >= total) OR (expiry_time > 0 AND expiry_time <= ?)) AND enable = ?", now, true).
Find(&depletedRows).Error
if err != nil {
return false, 0, err
}
if len(depletedRows) == 0 {
return false, 0, nil
}
rowByEmail := make(map[string]*xray.ClientTraffic, len(depletedRows))
depletedEmails := make([]string, 0, len(depletedRows))
for i := range depletedRows {
if depletedRows[i].Email == "" {
continue
}
rowByEmail[strings.ToLower(depletedRows[i].Email)] = &depletedRows[i]
depletedEmails = append(depletedEmails, depletedRows[i].Email)
}
// Resolve inbound membership only for the depleted emails — pushing the
// filter into SQLite avoids dragging every panel client through Go for
// the common case where most clients are healthy.
var memberships []struct {
InboundId int
Tag string
Email string
SubID string `gorm:"column:sub_id"`
}
if len(depletedEmails) > 0 {
err = tx.Raw(`
SELECT inbounds.id AS inbound_id,
inbounds.tag AS tag,
JSON_EXTRACT(client.value, '$.email') AS email,
JSON_EXTRACT(client.value, '$.subId') AS sub_id
FROM inbounds,
JSON_EACH(JSON_EXTRACT(inbounds.settings, '$.clients')) AS client
WHERE LOWER(JSON_EXTRACT(client.value, '$.email')) IN ?
`, lowerAll(depletedEmails)).Scan(&memberships).Error
if err != nil {
return false, 0, err
}
}
// Discover the row holder's subId per email. Only siblings sharing it
// get cascaded; legacy data where two identities reuse the same email
// stays isolated to the row owner.
holderSub := make(map[string]string, len(rowByEmail))
for _, m := range memberships {
email := strings.ToLower(strings.Trim(m.Email, "\""))
row, ok := rowByEmail[email]
if !ok || m.InboundId != row.InboundId {
continue
}
holderSub[email] = strings.Trim(m.SubID, "\"")
}
type target struct {
InboundId int
Tag string
Email string
}
var targets []target
for _, m := range memberships {
email := strings.ToLower(strings.Trim(m.Email, "\""))
row, ok := rowByEmail[email]
if !ok {
continue
}
expected, hasSub := holderSub[email]
mSub := strings.Trim(m.SubID, "\"")
switch {
case !hasSub || expected == "":
if m.InboundId != row.InboundId {
continue
}
case mSub != expected:
continue
}
targets = append(targets, target{
InboundId: m.InboundId,
Tag: m.Tag,
Email: strings.Trim(m.Email, "\""),
})
}
if p != nil && len(targets) > 0 {
s.xrayApi.Init(p.GetAPIPort())
for _, t := range targets {
err1 := s.xrayApi.RemoveUser(t.Tag, t.Email)
if err1 == nil {
logger.Debug("Client disabled by api:", t.Email)
} else if strings.Contains(err1.Error(), fmt.Sprintf("User %s not found.", t.Email)) {
logger.Debug("User is already disabled. Nothing to do more...")
} else {
logger.Debug("Error in disabling client by api:", err1)
needRestart = true
}
}
s.xrayApi.Close()
}
result := tx.Model(xray.ClientTraffic{}).
Where("((total > 0 and up + down >= total) or (expiry_time > 0 and expiry_time <= ?)) and enable = ?", now, true).
Update("enable", false)
err = result.Error
count := result.RowsAffected
if err != nil {
return needRestart, count, err
}
if len(targets) == 0 {
return needRestart, count, nil
}
// Mirror enable=false + the row's authoritative quota/expiry into every
// (inbound, email) we just removed via the API.
inboundEmailMap := make(map[int]map[string]struct{})
for _, t := range targets {
if inboundEmailMap[t.InboundId] == nil {
inboundEmailMap[t.InboundId] = make(map[string]struct{})
}
inboundEmailMap[t.InboundId][t.Email] = struct{}{}
}
inboundIds := make([]int, 0, len(inboundEmailMap))
for id := range inboundEmailMap {
inboundIds = append(inboundIds, id)
}
var inbounds []*model.Inbound
if err = tx.Model(model.Inbound{}).Where("id IN ?", inboundIds).Find(&inbounds).Error; err != nil {
logger.Warning("disableInvalidClients fetch inbounds:", err)
return needRestart, count, nil
}
dirty := make([]*model.Inbound, 0, len(inbounds))
for _, inbound := range inbounds {
settings := map[string]any{}
if jsonErr := json.Unmarshal([]byte(inbound.Settings), &settings); jsonErr != nil {
continue
}
clientsRaw, ok := settings["clients"].([]any)
if !ok {
continue
}
emailSet := inboundEmailMap[inbound.Id]
changed := false
for i := range clientsRaw {
c, ok := clientsRaw[i].(map[string]any)
if !ok {
continue
}
email, _ := c["email"].(string)
if _, shouldDisable := emailSet[email]; !shouldDisable {
continue
}
c["enable"] = false
if row, ok := rowByEmail[strings.ToLower(email)]; ok {
c["totalGB"] = row.Total
c["expiryTime"] = row.ExpiryTime
}
c["updated_at"] = now
clientsRaw[i] = c
changed = true
}
if !changed {
continue
}
settings["clients"] = clientsRaw
modifiedSettings, jsonErr := json.MarshalIndent(settings, "", " ")
if jsonErr != nil {
continue
}
inbound.Settings = string(modifiedSettings)
dirty = append(dirty, inbound)
}
if len(dirty) > 0 {
if err = tx.Save(dirty).Error; err != nil {
logger.Warning("disableInvalidClients update inbound settings:", err)
}
}
return needRestart, count, nil
}
func (s *InboundService) GetInboundTags() (string, error) {
db := database.GetDB()
var inboundTags []string
err := db.Model(model.Inbound{}).Select("tag").Find(&inboundTags).Error
if err != nil && err != gorm.ErrRecordNotFound {
return "", err
}
tags, _ := json.Marshal(inboundTags)
return string(tags), nil
}
func (s *InboundService) GetClientReverseTags() (string, error) {
db := database.GetDB()
var inbounds []model.Inbound
err := db.Model(model.Inbound{}).Select("settings").Where("protocol = ?", "vless").Find(&inbounds).Error
if err != nil && err != gorm.ErrRecordNotFound {
return "[]", err
}
tagSet := make(map[string]struct{})
for _, inbound := range inbounds {
var settings map[string]any
if err := json.Unmarshal([]byte(inbound.Settings), &settings); err != nil {
continue
}
clients, ok := settings["clients"].([]any)
if !ok {
continue
}
for _, client := range clients {
clientMap, ok := client.(map[string]any)
if !ok {
continue
}
reverse, ok := clientMap["reverse"].(map[string]any)
if !ok {
continue
}
tag, _ := reverse["tag"].(string)
tag = strings.TrimSpace(tag)
if tag != "" {
tagSet[tag] = struct{}{}
}
}
}
rawTags := make([]string, 0, len(tagSet))
for tag := range tagSet {
rawTags = append(rawTags, tag)
}
sort.Strings(rawTags)
result, _ := json.Marshal(rawTags)
return string(result), nil
}
func (s *InboundService) MigrationRemoveOrphanedTraffics() {
db := database.GetDB()
db.Exec(`
DELETE FROM client_traffics
WHERE email NOT IN (
SELECT JSON_EXTRACT(client.value, '$.email')
FROM inbounds,
JSON_EACH(JSON_EXTRACT(inbounds.settings, '$.clients')) AS client
)
`)
}
// AddClientStat inserts a per-client accounting row; it is a no-op on an
// email conflict. Xray reports traffic per email, so the surviving row acts
// as the shared accumulator for inbounds that reuse the same identity.
func (s *InboundService) AddClientStat(tx *gorm.DB, inboundId int, client *model.Client) error {
clientTraffic := xray.ClientTraffic{
InboundId: inboundId,
Email: client.Email,
Total: client.TotalGB,
ExpiryTime: client.ExpiryTime,
Enable: client.Enable,
Reset: client.Reset,
}
return tx.Clauses(clause.OnConflict{Columns: []clause.Column{{Name: "email"}}, DoNothing: true}).
Create(&clientTraffic).Error
}
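// Under SQLite, the OnConflict clause above compiles to roughly the
// following (illustrative SQL, not the exact statement GORM emits):
//
//	INSERT INTO client_traffics (inbound_id, email, ...) VALUES (...)
//	ON CONFLICT ("email") DO NOTHING
//
// i.e. a duplicate email leaves the existing accumulator row untouched
// instead of returning a constraint error.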
func (s *InboundService) UpdateClientStat(tx *gorm.DB, email string, client *model.Client) error {
result := tx.Model(xray.ClientTraffic{}).
Where("email = ?", email).
Updates(map[string]any{
"enable": client.Enable,
"email": client.Email,
"total": client.TotalGB,
"expiry_time": client.ExpiryTime,
"reset": client.Reset,
})
return result.Error
}
func (s *InboundService) UpdateClientIPs(tx *gorm.DB, oldEmail string, newEmail string) error {
return tx.Model(model.InboundClientIps{}).Where("client_email = ?", oldEmail).Update("client_email", newEmail).Error
}
func (s *InboundService) DelClientStat(tx *gorm.DB, email string) error {
return tx.Where("email = ?", email).Delete(xray.ClientTraffic{}).Error
}
func (s *InboundService) DelClientIPs(tx *gorm.DB, email string) error {
return tx.Where("client_email = ?", email).Delete(model.InboundClientIps{}).Error
}
func (s *InboundService) GetClientInboundByTrafficID(trafficId int) (traffic *xray.ClientTraffic, inbound *model.Inbound, err error) {
db := database.GetDB()
var traffics []*xray.ClientTraffic
err = db.Model(xray.ClientTraffic{}).Where("id = ?", trafficId).Find(&traffics).Error
if err != nil {
logger.Warningf("Error retrieving ClientTraffic with trafficId %d: %v", trafficId, err)
return nil, nil, err
}
if len(traffics) > 0 {
inbound, err = s.GetInbound(traffics[0].InboundId)
return traffics[0], inbound, err
}
return nil, nil, nil
}
func (s *InboundService) GetClientInboundByEmail(email string) (traffic *xray.ClientTraffic, inbound *model.Inbound, err error) {
db := database.GetDB()
var traffics []*xray.ClientTraffic
err = db.Model(xray.ClientTraffic{}).Where("email = ?", email).Find(&traffics).Error
if err != nil {
logger.Warningf("Error retrieving ClientTraffic with email %s: %v", email, err)
return nil, nil, err
}
if len(traffics) > 0 {
inbound, err = s.GetInbound(traffics[0].InboundId)
return traffics[0], inbound, err
}
return nil, nil, nil
}
func (s *InboundService) GetClientByEmail(clientEmail string) (*xray.ClientTraffic, *model.Client, error) {
traffic, inbound, err := s.GetClientInboundByEmail(clientEmail)
if err != nil {
return nil, nil, err
}
if inbound == nil {
return nil, nil, common.NewError("Inbound Not Found For Email:", clientEmail)
}
clients, err := s.GetClients(inbound)
if err != nil {
return nil, nil, err
}
for _, client := range clients {
if client.Email == clientEmail {
return traffic, &client, nil
}
}
return nil, nil, common.NewError("Client Not Found In Inbound For Email:", clientEmail)
}
func (s *InboundService) SetClientTelegramUserID(trafficId int, tgId int64) (bool, error) {
traffic, inbound, err := s.GetClientInboundByTrafficID(trafficId)
if err != nil {
return false, err
}
if inbound == nil {
return false, common.NewError("Inbound Not Found For Traffic ID:", trafficId)
}
clientEmail := traffic.Email
oldClients, err := s.GetClients(inbound)
if err != nil {
return false, err
}
clientId := ""
for _, oldClient := range oldClients {
if oldClient.Email == clientEmail {
switch inbound.Protocol {
case "trojan":
clientId = oldClient.Password
case "shadowsocks":
clientId = oldClient.Email
default:
clientId = oldClient.ID
}
break
}
}
if len(clientId) == 0 {
return false, common.NewError("Client Not Found For Email:", clientEmail)
}
var settings map[string]any
err = json.Unmarshal([]byte(inbound.Settings), &settings)
if err != nil {
return false, err
}
clients, ok := settings["clients"].([]any)
if !ok {
return false, common.NewError("invalid clients format in inbound settings")
}
// Keep only the matching client: UpdateInboundClient consumes an inbound
// whose settings list just the client being updated.
var newClients []any
for _, clientRaw := range clients {
c, ok := clientRaw.(map[string]any)
if !ok {
continue
}
if c["email"] == clientEmail {
c["tgId"] = tgId
c["updated_at"] = time.Now().Unix() * 1000
newClients = append(newClients, c)
}
}
settings["clients"] = newClients
modifiedSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return false, err
}
inbound.Settings = string(modifiedSettings)
needRestart, err := s.UpdateInboundClient(inbound, clientId)
return needRestart, err
}
func (s *InboundService) checkIsEnabledByEmail(clientEmail string) (bool, error) {
_, inbound, err := s.GetClientInboundByEmail(clientEmail)
if err != nil {
return false, err
}
if inbound == nil {
return false, common.NewError("Inbound Not Found For Email:", clientEmail)
}
clients, err := s.GetClients(inbound)
if err != nil {
return false, err
}
isEnable := false
for _, client := range clients {
if client.Email == clientEmail {
isEnable = client.Enable
break
}
}
return isEnable, nil
}
func (s *InboundService) ToggleClientEnableByEmail(clientEmail string) (bool, bool, error) {
_, inbound, err := s.GetClientInboundByEmail(clientEmail)
if err != nil {
return false, false, err
}
if inbound == nil {
return false, false, common.NewError("Inbound Not Found For Email:", clientEmail)
}
oldClients, err := s.GetClients(inbound)
if err != nil {
return false, false, err
}
clientId := ""
clientOldEnabled := false
for _, oldClient := range oldClients {
if oldClient.Email == clientEmail {
switch inbound.Protocol {
case "trojan":
clientId = oldClient.Password
case "shadowsocks":
clientId = oldClient.Email
default:
clientId = oldClient.ID
}
clientOldEnabled = oldClient.Enable
break
}
}
if len(clientId) == 0 {
return false, false, common.NewError("Client Not Found For Email:", clientEmail)
}
var settings map[string]any
err = json.Unmarshal([]byte(inbound.Settings), &settings)
if err != nil {
return false, false, err
}
clients, ok := settings["clients"].([]any)
if !ok {
return false, false, common.NewError("invalid clients format in inbound settings")
}
var newClients []any
for _, clientRaw := range clients {
c, ok := clientRaw.(map[string]any)
if !ok {
continue
}
if c["email"] == clientEmail {
c["enable"] = !clientOldEnabled
c["updated_at"] = time.Now().Unix() * 1000
newClients = append(newClients, c)
}
}
settings["clients"] = newClients
modifiedSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return false, false, err
}
inbound.Settings = string(modifiedSettings)
needRestart, err := s.UpdateInboundClient(inbound, clientId)
if err != nil {
return false, needRestart, err
}
return !clientOldEnabled, needRestart, nil
}
// SetClientEnableByEmail sets client enable state to desired value; returns (changed, needRestart, error)
func (s *InboundService) SetClientEnableByEmail(clientEmail string, enable bool) (bool, bool, error) {
current, err := s.checkIsEnabledByEmail(clientEmail)
if err != nil {
return false, false, err
}
if current == enable {
return false, false, nil
}
newEnabled, needRestart, err := s.ToggleClientEnableByEmail(clientEmail)
if err != nil {
return false, needRestart, err
}
return newEnabled == enable, needRestart, nil
}
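// Hypothetical usage sketch (example email invented): disabling an
// already-disabled client is a no-op, so callers can treat a false first
// return as "nothing changed":
//
//	changed, needRestart, err := s.SetClientEnableByEmail("user@example.com", false)
//	// changed is false when the client was already disabled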
func (s *InboundService) ResetClientIpLimitByEmail(clientEmail string, count int) (bool, error) {
_, inbound, err := s.GetClientInboundByEmail(clientEmail)
if err != nil {
return false, err
}
if inbound == nil {
return false, common.NewError("Inbound Not Found For Email:", clientEmail)
}
oldClients, err := s.GetClients(inbound)
if err != nil {
return false, err
}
clientId := ""
for _, oldClient := range oldClients {
if oldClient.Email == clientEmail {
switch inbound.Protocol {
case "trojan":
clientId = oldClient.Password
case "shadowsocks":
clientId = oldClient.Email
default:
clientId = oldClient.ID
}
break
}
}
if len(clientId) == 0 {
return false, common.NewError("Client Not Found For Email:", clientEmail)
}
var settings map[string]any
err = json.Unmarshal([]byte(inbound.Settings), &settings)
if err != nil {
return false, err
}
clients, ok := settings["clients"].([]any)
if !ok {
return false, common.NewError("invalid clients format in inbound settings")
}
var newClients []any
for _, clientRaw := range clients {
c, ok := clientRaw.(map[string]any)
if !ok {
continue
}
if c["email"] == clientEmail {
c["limitIp"] = count
c["updated_at"] = time.Now().Unix() * 1000
newClients = append(newClients, c)
}
}
settings["clients"] = newClients
modifiedSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return false, err
}
inbound.Settings = string(modifiedSettings)
needRestart, err := s.UpdateInboundClient(inbound, clientId)
return needRestart, err
}
func (s *InboundService) ResetClientExpiryTimeByEmail(clientEmail string, expiry_time int64) (bool, error) {
_, inbound, err := s.GetClientInboundByEmail(clientEmail)
if err != nil {
return false, err
}
if inbound == nil {
return false, common.NewError("Inbound Not Found For Email:", clientEmail)
}
oldClients, err := s.GetClients(inbound)
if err != nil {
return false, err
}
clientId := ""
for _, oldClient := range oldClients {
if oldClient.Email == clientEmail {
switch inbound.Protocol {
case "trojan":
clientId = oldClient.Password
case "shadowsocks":
clientId = oldClient.Email
default:
clientId = oldClient.ID
}
break
}
}
if len(clientId) == 0 {
return false, common.NewError("Client Not Found For Email:", clientEmail)
}
var settings map[string]any
err = json.Unmarshal([]byte(inbound.Settings), &settings)
if err != nil {
return false, err
}
clients, ok := settings["clients"].([]any)
if !ok {
return false, common.NewError("invalid clients format in inbound settings")
}
var newClients []any
for _, clientRaw := range clients {
c, ok := clientRaw.(map[string]any)
if !ok {
continue
}
if c["email"] == clientEmail {
c["expiryTime"] = expiry_time
c["updated_at"] = time.Now().Unix() * 1000
newClients = append(newClients, c)
}
}
settings["clients"] = newClients
modifiedSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return false, err
}
inbound.Settings = string(modifiedSettings)
needRestart, err := s.UpdateInboundClient(inbound, clientId)
return needRestart, err
}
func (s *InboundService) ResetClientTrafficLimitByEmail(clientEmail string, totalGB int) (bool, error) {
if totalGB < 0 {
return false, common.NewError("totalGB must be >= 0")
}
_, inbound, err := s.GetClientInboundByEmail(clientEmail)
if err != nil {
return false, err
}
if inbound == nil {
return false, common.NewError("Inbound Not Found For Email:", clientEmail)
}
oldClients, err := s.GetClients(inbound)
if err != nil {
return false, err
}
clientId := ""
for _, oldClient := range oldClients {
if oldClient.Email == clientEmail {
switch inbound.Protocol {
case "trojan":
clientId = oldClient.Password
case "shadowsocks":
clientId = oldClient.Email
default:
clientId = oldClient.ID
}
break
}
}
if len(clientId) == 0 {
return false, common.NewError("Client Not Found For Email:", clientEmail)
}
var settings map[string]any
err = json.Unmarshal([]byte(inbound.Settings), &settings)
if err != nil {
return false, err
}
clients, ok := settings["clients"].([]any)
if !ok {
return false, common.NewError("invalid clients format in inbound settings")
}
var newClients []any
for _, clientRaw := range clients {
c, ok := clientRaw.(map[string]any)
if !ok {
continue
}
if c["email"] == clientEmail {
// int64 cast: totalGB * 1024^3 overflows int on 32-bit builds (386/arm).
c["totalGB"] = int64(totalGB) * 1024 * 1024 * 1024
c["updated_at"] = time.Now().Unix() * 1000
newClients = append(newClients, c)
}
}
settings["clients"] = newClients
modifiedSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return false, err
}
inbound.Settings = string(modifiedSettings)
needRestart, err := s.UpdateInboundClient(inbound, clientId)
return needRestart, err
}
func (s *InboundService) ResetClientTrafficByEmail(clientEmail string) error {
db := database.GetDB()
// Reset traffic stats in ClientTraffic table
return db.Model(xray.ClientTraffic{}).
Where("email = ?", clientEmail).
Updates(map[string]any{"enable": true, "up": 0, "down": 0}).
Error
}
func (s *InboundService) ResetClientTraffic(id int, clientEmail string) (bool, error) {
needRestart := false
traffic, err := s.GetClientTrafficByEmail(clientEmail)
if err != nil {
return false, err
}
// GetClientTrafficByEmail can return (nil, nil); guard before dereferencing.
if traffic == nil {
return false, common.NewError("Client Traffic Not Found For Email:", clientEmail)
}
if !traffic.Enable {
inbound, err := s.GetInbound(id)
if err != nil {
return false, err
}
clients, err := s.GetClients(inbound)
if err != nil {
return false, err
}
for _, client := range clients {
if client.Email == clientEmail && client.Enable {
rt, rterr := s.runtimeFor(inbound)
if rterr != nil {
if inbound.NodeID != nil {
return false, rterr
}
needRestart = true
break
}
cipher := ""
if string(inbound.Protocol) == "shadowsocks" {
var oldSettings map[string]any
err = json.Unmarshal([]byte(inbound.Settings), &oldSettings)
if err != nil {
return false, err
}
cipher, _ = oldSettings["method"].(string) // comma-ok guards against malformed settings
}
err1 := rt.AddUser(context.Background(), inbound, map[string]any{
"email": client.Email,
"id": client.ID,
"auth": client.Auth,
"security": client.Security,
"flow": client.Flow,
"password": client.Password,
"cipher": cipher,
})
if err1 == nil {
logger.Debug("Client enabled on", rt.Name(), "due to reset traffic:", clientEmail)
} else {
logger.Debug("Error in enabling client on", rt.Name(), ":", err1)
needRestart = true
}
break
}
}
}
traffic.Up = 0
traffic.Down = 0
traffic.Enable = true
db := database.GetDB()
err = db.Save(traffic).Error
if err != nil {
return false, err
}
// Stamp last_traffic_reset_time on the parent inbound so the next
// NodeTrafficSyncJob tick honours the grace window and doesn't pull
// the pre-reset absolute back from the node.
now := time.Now().UnixMilli()
_ = db.Model(model.Inbound{}).
Where("id = ?", id).
Update("last_traffic_reset_time", now).Error
// Propagate to the remote node if this inbound is node-managed.
// Best-effort: an offline node shouldn't block a user-driven reset
// — the central DB is already zeroed and the next successful sync
// (within the grace window) will re-pull whatever the node has.
inbound, err := s.GetInbound(id)
if err == nil && inbound != nil && inbound.NodeID != nil {
if rt, rterr := s.runtimeFor(inbound); rterr == nil {
if e := rt.ResetClientTraffic(context.Background(), inbound, clientEmail); e != nil {
logger.Warning("ResetClientTraffic: remote propagation to", rt.Name(), "failed:", e)
}
} else {
logger.Warning("ResetClientTraffic: runtime lookup failed:", rterr)
}
}
return needRestart, nil
}
func (s *InboundService) ResetAllClientTraffics(id int) error {
db := database.GetDB()
now := time.Now().Unix() * 1000
if err := db.Transaction(func(tx *gorm.DB) error {
whereText := "inbound_id = ?"
if id == -1 {
whereText = "inbound_id > ?"
}
// Reset client traffics
result := tx.Model(xray.ClientTraffic{}).
Where(whereText, id).
Updates(map[string]any{"enable": true, "up": 0, "down": 0})
if result.Error != nil {
return result.Error
}
// Update lastTrafficResetTime for the inbound(s)
inboundWhereText := "id = ?"
if id == -1 {
inboundWhereText = "id > ?"
}
result = tx.Model(model.Inbound{}).
Where(inboundWhereText, id).
Update("last_traffic_reset_time", now)
return result.Error
}); err != nil {
return err
}
// Propagate to remote nodes after the central DB is settled. Single
// inbound: one rt.ResetInboundClientTraffics call. id == -1 (all
// inbounds across panel): walk every node-managed inbound and call
// the per-inbound endpoint — there's no panel-wide endpoint that
// only resets clients without zeroing inbound counters.
var inbounds []model.Inbound
q := db.Model(model.Inbound{}).Where("node_id IS NOT NULL")
if id != -1 {
q = q.Where("id = ?", id)
}
if err := q.Find(&inbounds).Error; err != nil {
// Failed to discover which inbounds to propagate to — central
// DB is already correct, log and move on.
logger.Warning("ResetAllClientTraffics: discover node inbounds failed:", err)
return nil
}
for i := range inbounds {
ib := &inbounds[i]
rt, rterr := s.runtimeFor(ib)
if rterr != nil {
logger.Warning("ResetAllClientTraffics: runtime lookup for inbound", ib.Id, "failed:", rterr)
continue
}
if e := rt.ResetInboundClientTraffics(context.Background(), ib); e != nil {
logger.Warning("ResetAllClientTraffics: remote propagation to", rt.Name(), "failed:", e)
}
}
return nil
}
func (s *InboundService) ResetAllTraffics() error {
db := database.GetDB()
now := time.Now().UnixMilli()
if err := db.Model(model.Inbound{}).
Where("user_id > ?", 0).
Updates(map[string]any{
"up": 0,
"down": 0,
"last_traffic_reset_time": now,
}).Error; err != nil {
return err
}
// Propagate to every node that has at least one inbound on this
// panel. We can't blanket-call rt.ResetAllTraffics because that
// would also zero traffic for inbounds the node hosts but the
// central panel doesn't know about — instead reset per inbound.
var inbounds []model.Inbound
if err := db.Model(model.Inbound{}).
Where("node_id IS NOT NULL").
Find(&inbounds).Error; err != nil {
logger.Warning("ResetAllTraffics: discover node inbounds failed:", err)
return nil
}
for i := range inbounds {
ib := &inbounds[i]
rt, rterr := s.runtimeFor(ib)
if rterr != nil {
logger.Warning("ResetAllTraffics: runtime lookup for inbound", ib.Id, "failed:", rterr)
continue
}
if e := rt.ResetInboundClientTraffics(context.Background(), ib); e != nil {
logger.Warning("ResetAllTraffics: remote propagation to", rt.Name(), "failed:", e)
}
}
return nil
}
func (s *InboundService) ResetInboundTraffic(id int) error {
db := database.GetDB()
result := db.Model(model.Inbound{}).
Where("id = ?", id).
Updates(map[string]any{"up": 0, "down": 0})
return result.Error
}
func (s *InboundService) DelDepletedClients(id int) (err error) {
db := database.GetDB()
tx := db.Begin()
defer func() {
if err == nil {
tx.Commit()
} else {
tx.Rollback()
}
}()
// Collect depleted emails globally — a shared-email row owned by one
// inbound depletes every sibling that lists the email.
now := time.Now().Unix() * 1000
depletedClause := "reset = 0 and ((total > 0 and up + down >= total) or (expiry_time > 0 and expiry_time <= ?))"
var depletedRows []xray.ClientTraffic
err = db.Model(xray.ClientTraffic{}).
Where(depletedClause, now).
Find(&depletedRows).Error
if err != nil {
return err
}
if len(depletedRows) == 0 {
return nil
}
depletedEmails := make(map[string]struct{}, len(depletedRows))
for _, r := range depletedRows {
if r.Email == "" {
continue
}
depletedEmails[strings.ToLower(r.Email)] = struct{}{}
}
if len(depletedEmails) == 0 {
return nil
}
var inbounds []*model.Inbound
inboundQuery := db.Model(model.Inbound{})
if id >= 0 {
inboundQuery = inboundQuery.Where("id = ?", id)
}
if err = inboundQuery.Find(&inbounds).Error; err != nil {
return err
}
for _, inbound := range inbounds {
var settings map[string]any
if err = json.Unmarshal([]byte(inbound.Settings), &settings); err != nil {
return err
}
rawClients, ok := settings["clients"].([]any)
if !ok {
continue
}
newClients := make([]any, 0, len(rawClients))
removed := 0
for _, client := range rawClients {
c, ok := client.(map[string]any)
if !ok {
newClients = append(newClients, client)
continue
}
email, _ := c["email"].(string)
if _, isDepleted := depletedEmails[strings.ToLower(email)]; isDepleted {
removed++
continue
}
newClients = append(newClients, client)
}
if removed == 0 {
continue
}
if len(newClients) == 0 {
s.DelInbound(inbound.Id)
continue
}
settings["clients"] = newClients
ns, mErr := json.MarshalIndent(settings, "", " ")
if mErr != nil {
return mErr
}
inbound.Settings = string(ns)
if err = tx.Save(inbound).Error; err != nil {
return err
}
}
// Drop now-orphaned rows. With id >= 0, a row is safe to drop only when
// no out-of-scope inbound still references the email.
if id < 0 {
err = tx.Where(depletedClause, now).Delete(xray.ClientTraffic{}).Error
return err
}
emails := make([]string, 0, len(depletedEmails))
for e := range depletedEmails {
emails = append(emails, e)
}
var stillReferenced []string
if err = tx.Raw(`
SELECT DISTINCT LOWER(JSON_EXTRACT(client.value, '$.email'))
FROM inbounds,
JSON_EACH(JSON_EXTRACT(inbounds.settings, '$.clients')) AS client
WHERE LOWER(JSON_EXTRACT(client.value, '$.email')) IN ?
`, emails).Scan(&stillReferenced).Error; err != nil {
return err
}
stillSet := make(map[string]struct{}, len(stillReferenced))
for _, e := range stillReferenced {
stillSet[e] = struct{}{}
}
toDelete := make([]string, 0, len(emails))
for _, e := range emails {
if _, kept := stillSet[e]; !kept {
toDelete = append(toDelete, e)
}
}
if len(toDelete) > 0 {
if err = tx.Where("LOWER(email) IN ?", toDelete).Delete(xray.ClientTraffic{}).Error; err != nil {
return err
}
}
return nil
}
func (s *InboundService) GetClientTrafficTgBot(tgId int64) ([]*xray.ClientTraffic, error) {
db := database.GetDB()
var inbounds []*model.Inbound
// Retrieve inbounds where settings contain the given tgId
err := db.Model(model.Inbound{}).Where("settings LIKE ?", fmt.Sprintf(`%%"tgId": %d%%`, tgId)).Find(&inbounds).Error
if err != nil && err != gorm.ErrRecordNotFound {
logger.Errorf("Error retrieving inbounds with tgId %d: %v", tgId, err)
return nil, err
}
var emails []string
for _, inbound := range inbounds {
clients, err := s.GetClients(inbound)
if err != nil {
logger.Errorf("Error retrieving clients for inbound %d: %v", inbound.Id, err)
continue
}
for _, client := range clients {
if client.TgID == tgId {
emails = append(emails, client.Email)
}
}
}
// Chunked to stay under SQLite's bind-variable limit when a single Telegram
// account owns thousands of clients across inbounds.
uniqEmails := uniqueNonEmptyStrings(emails)
traffics := make([]*xray.ClientTraffic, 0, len(uniqEmails))
for _, batch := range chunkStrings(uniqEmails, sqliteMaxVars) {
var page []*xray.ClientTraffic
if err = db.Model(xray.ClientTraffic{}).Where("email IN ?", batch).Find(&page).Error; err != nil {
logger.Errorf("Error retrieving ClientTraffic for emails %v: %v", batch, err)
return nil, err
}
traffics = append(traffics, page...)
}
if len(traffics) == 0 {
logger.Warning("No ClientTraffic records found for emails:", emails)
return nil, nil
}
// Populate UUID and other client data for each traffic record
for i := range traffics {
if ct, client, e := s.GetClientByEmail(traffics[i].Email); e == nil && ct != nil && client != nil {
traffics[i].Enable = client.Enable
traffics[i].UUID = client.ID
traffics[i].SubId = client.SubID
}
}
return traffics, nil
}
// sqliteMaxVars is a safe ceiling for the number of bind parameters in a
// single SQL statement. SQLite's SQLITE_MAX_VARIABLE_NUMBER is 999 on builds
// before 3.32 and 32766 after; staying under 999 keeps queries portable
// across forks/old binaries and also bounds per-query memory on truly large
// installs (>32k clients) where even modern SQLite would refuse a single IN.
const sqliteMaxVars = 900
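// For example, a Telegram account that owns 2,500 clients resolves to
// ceil(2500/900) = 3 queries, binding 900, 900, and 700 emails each.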
// uniqueNonEmptyStrings returns a deduplicated copy of in with empty strings
// removed, preserving the order of first occurrence.
func uniqueNonEmptyStrings(in []string) []string {
if len(in) == 0 {
return nil
}
seen := make(map[string]struct{}, len(in))
out := make([]string, 0, len(in))
for _, v := range in {
if v == "" {
continue
}
if _, ok := seen[v]; ok {
continue
}
seen[v] = struct{}{}
out = append(out, v)
}
return out
}
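// Example (hypothetical values):
//
//	uniqueNonEmptyStrings([]string{"a", "", "b", "a"}) // → []string{"a", "b"}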
// uniqueInts returns a deduplicated copy of in, preserving order of first occurrence.
func uniqueInts(in []int) []int {
if len(in) == 0 {
return nil
}
seen := make(map[int]struct{}, len(in))
out := make([]int, 0, len(in))
for _, v := range in {
if _, ok := seen[v]; ok {
continue
}
seen[v] = struct{}{}
out = append(out, v)
}
return out
}
// chunkStrings splits s into consecutive sub-slices of at most size elements.
// Returns nil for an empty input or non-positive size.
func chunkStrings(s []string, size int) [][]string {
if size <= 0 || len(s) == 0 {
return nil
}
out := make([][]string, 0, (len(s)+size-1)/size)
for i := 0; i < len(s); i += size {
end := i + size
if end > len(s) {
end = len(s)
}
out = append(out, s[i:end])
}
return out
}
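// Example (hypothetical values):
//
//	chunkStrings([]string{"a", "b", "c", "d", "e"}, 2)
//	// → [][]string{{"a", "b"}, {"c", "d"}, {"e"}}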
// chunkInts splits s into consecutive sub-slices of at most size elements.
// Returns nil for an empty input or non-positive size.
func chunkInts(s []int, size int) [][]int {
if size <= 0 || len(s) == 0 {
return nil
}
out := make([][]int, 0, (len(s)+size-1)/size)
for i := 0; i < len(s); i += size {
end := i + size
if end > len(s) {
end = len(s)
}
out = append(out, s[i:end])
}
return out
}
// GetActiveClientTraffics returns the absolute ClientTraffic rows for the given
// emails. Used by the WebSocket delta path to push per-client absolute
// counters without re-serializing the full inbound list. The query is chunked
// to stay under SQLite's bind-variable limit on very large active sets.
// Empty input returns (nil, nil).
func (s *InboundService) GetActiveClientTraffics(emails []string) ([]*xray.ClientTraffic, error) {
uniq := uniqueNonEmptyStrings(emails)
if len(uniq) == 0 {
return nil, nil
}
db := database.GetDB()
traffics := make([]*xray.ClientTraffic, 0, len(uniq))
for _, batch := range chunkStrings(uniq, sqliteMaxVars) {
var page []*xray.ClientTraffic
if err := db.Model(xray.ClientTraffic{}).Where("email IN ?", batch).Find(&page).Error; err != nil {
return nil, err
}
traffics = append(traffics, page...)
}
return traffics, nil
}
// InboundTrafficSummary is the minimal projection of an inbound's traffic
// counters used by the WebSocket delta path. Excludes Settings/StreamSettings
// blobs so the broadcast stays compact even with many inbounds.
type InboundTrafficSummary struct {
Id int `json:"id"`
Up int64 `json:"up"`
Down int64 `json:"down"`
Total int64 `json:"total"`
AllTime int64 `json:"allTime"`
Enable bool `json:"enable"`
}
// GetInboundsTrafficSummary returns inbound-level absolute traffic counters
// (no per-client expansion). Companion to GetActiveClientTraffics — together
// they replace the heavy "full inbound list" broadcast on each cron tick.
func (s *InboundService) GetInboundsTrafficSummary() ([]InboundTrafficSummary, error) {
db := database.GetDB()
var summaries []InboundTrafficSummary
if err := db.Model(&model.Inbound{}).
Select("id, up, down, total, all_time, enable").
Find(&summaries).Error; err != nil {
return nil, err
}
return summaries, nil
}
func (s *InboundService) GetClientTrafficByEmail(email string) (traffic *xray.ClientTraffic, err error) {
// Prefer retrieving along with client to reflect actual enabled state from inbound settings
t, client, err := s.GetClientByEmail(email)
if err != nil {
logger.Warningf("Error retrieving ClientTraffic with email %s: %v", email, err)
return nil, err
}
if t != nil && client != nil {
t.UUID = client.ID
t.SubId = client.SubID
return t, nil
}
return nil, nil
}
func (s *InboundService) UpdateClientTrafficByEmail(email string, upload int64, download int64) error {
db := database.GetDB()
// Keep all_time monotonic: it represents historical cumulative usage and
// must never drop below the currently tracked up+down. Without this, the
// UI showed the "Total traffic" (allTime) figure below the live consumed
// value after admins manually edited a client's counters.
result := db.Model(xray.ClientTraffic{}).
Where("email = ?", email).
Updates(map[string]any{
"up": upload,
"down": download,
"all_time": gorm.Expr("CASE WHEN COALESCE(all_time, 0) < ? THEN ? ELSE all_time END", upload+download, upload+download),
})
err := result.Error
if err != nil {
logger.Warningf("Error updating ClientTraffic with email %s: %v", email, err)
return err
}
return nil
}
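// Worked example of the CASE guard above: with all_time = 100 and a report
// of up+down = 150, all_time is raised to 150; with all_time = 200 and the
// same report, all_time stays 200.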
func (s *InboundService) GetClientTrafficByID(id string) ([]xray.ClientTraffic, error) {
db := database.GetDB()
var traffics []xray.ClientTraffic
err := db.Model(xray.ClientTraffic{}).Where(`email IN(
SELECT JSON_EXTRACT(client.value, '$.email') as email
FROM inbounds,
JSON_EACH(JSON_EXTRACT(inbounds.settings, '$.clients')) AS client
WHERE
JSON_EXTRACT(client.value, '$.id') in (?)
)`, id).Find(&traffics).Error
if err != nil {
logger.Debug(err)
return nil, err
}
// Reconcile enable flag with client settings per email to avoid stale DB value
for i := range traffics {
if ct, client, e := s.GetClientByEmail(traffics[i].Email); e == nil && ct != nil && client != nil {
traffics[i].Enable = client.Enable
traffics[i].UUID = client.ID
traffics[i].SubId = client.SubID
}
}
return traffics, err
}
func (s *InboundService) SearchClientTraffic(query string) (traffic *xray.ClientTraffic, err error) {
db := database.GetDB()
inbound := &model.Inbound{}
traffic = &xray.ClientTraffic{}
// Search for inbound settings that contain the query
err = db.Model(model.Inbound{}).Where("settings LIKE ?", "%\""+query+"\"%").First(inbound).Error
if err != nil {
if err == gorm.ErrRecordNotFound {
logger.Warningf("Inbound settings containing query %s not found: %v", query, err)
return nil, err
}
logger.Errorf("Error searching for inbound settings with query %s: %v", query, err)
return nil, err
}
traffic.InboundId = inbound.Id
// Unmarshal settings to get clients
settings := map[string][]model.Client{}
if err := json.Unmarshal([]byte(inbound.Settings), &settings); err != nil {
logger.Errorf("Error unmarshalling inbound settings for inbound ID %d: %v", inbound.Id, err)
return nil, err
}
clients := settings["clients"]
for _, client := range clients {
if (client.ID == query || client.Password == query) && client.Email != "" {
traffic.Email = client.Email
break
}
}
if traffic.Email == "" {
logger.Warningf("No client found with query %s in inbound ID %d", query, inbound.Id)
return nil, gorm.ErrRecordNotFound
}
// Retrieve ClientTraffic based on the found email
err = db.Model(xray.ClientTraffic{}).Where("email = ?", traffic.Email).First(traffic).Error
if err != nil {
if err == gorm.ErrRecordNotFound {
logger.Warningf("ClientTraffic for email %s not found: %v", traffic.Email, err)
return nil, err
}
logger.Errorf("Error retrieving ClientTraffic for email %s: %v", traffic.Email, err)
return nil, err
}
return traffic, nil
}
func (s *InboundService) GetInboundClientIps(clientEmail string) (string, error) {
db := database.GetDB()
InboundClientIps := &model.InboundClientIps{}
err := db.Model(model.InboundClientIps{}).Where("client_email = ?", clientEmail).First(InboundClientIps).Error
if err != nil {
return "", err
}
if InboundClientIps.Ips == "" {
return "", nil
}
// Try to parse as new format (with timestamps)
type IPWithTimestamp struct {
IP string `json:"ip"`
Timestamp int64 `json:"timestamp"`
}
var ipsWithTime []IPWithTimestamp
err = json.Unmarshal([]byte(InboundClientIps.Ips), &ipsWithTime)
// If successfully parsed as new format, return with timestamps
if err == nil && len(ipsWithTime) > 0 {
return InboundClientIps.Ips, nil
}
// Otherwise, assume it's old format (simple string array)
// Try to parse as simple array and convert to new format
var oldIps []string
err = json.Unmarshal([]byte(InboundClientIps.Ips), &oldIps)
if err == nil && len(oldIps) > 0 {
// Convert old format to new format with current timestamp
newIpsWithTime := make([]IPWithTimestamp, len(oldIps))
for i, ip := range oldIps {
newIpsWithTime[i] = IPWithTimestamp{
IP: ip,
Timestamp: time.Now().Unix(),
}
}
result, _ := json.Marshal(newIpsWithTime)
return string(result), nil
}
// Return as-is if parsing fails
return InboundClientIps.Ips, nil
}
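The legacy-to-timestamped conversion done inline in `GetInboundClientIps` can be sketched as a standalone function (hedged: `upgradeIPList` and the fixed timestamp are illustrative, not part of the service):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ipWithTimestamp matches the new on-disk shape used by GetInboundClientIps.
type ipWithTimestamp struct {
	IP        string `json:"ip"`
	Timestamp int64  `json:"timestamp"`
}

// upgradeIPList converts the legacy ["ip", ...] format to the timestamped
// format, stamping every entry with now. Input that is not the old format
// is returned unchanged together with the parse error.
func upgradeIPList(raw string, now int64) (string, error) {
	var oldIps []string
	if err := json.Unmarshal([]byte(raw), &oldIps); err != nil {
		return raw, err
	}
	upgraded := make([]ipWithTimestamp, len(oldIps))
	for i, ip := range oldIps {
		upgraded[i] = ipWithTimestamp{IP: ip, Timestamp: now}
	}
	out, err := json.Marshal(upgraded)
	return string(out), err
}

func main() {
	s, _ := upgradeIPList(`["1.2.3.4","5.6.7.8"]`, 1700000000)
	fmt.Println(s)
}
```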
func (s *InboundService) ClearClientIps(clientEmail string) error {
db := database.GetDB()
result := db.Model(model.InboundClientIps{}).
Where("client_email = ?", clientEmail).
Update("ips", "")
err := result.Error
if err != nil {
return err
}
return nil
}
func (s *InboundService) SearchInbounds(query string) ([]*model.Inbound, error) {
db := database.GetDB()
var inbounds []*model.Inbound
err := db.Model(model.Inbound{}).Preload("ClientStats").Where("remark like ?", "%"+query+"%").Find(&inbounds).Error
if err != nil && err != gorm.ErrRecordNotFound {
return nil, err
}
return inbounds, nil
}
func (s *InboundService) MigrationRequirements() {
db := database.GetDB()
tx := db.Begin()
var err error
defer func() {
if err == nil {
tx.Commit()
if dbErr := db.Exec(`VACUUM "main"`).Error; dbErr != nil {
logger.Warningf("VACUUM failed: %v", dbErr)
}
} else {
tx.Rollback()
}
}()
// Calculate and backfill all_time from up+down for inbounds and clients
err = tx.Exec(`
UPDATE inbounds
SET all_time = IFNULL(up, 0) + IFNULL(down, 0)
WHERE IFNULL(all_time, 0) = 0 AND (IFNULL(up, 0) + IFNULL(down, 0)) > 0
`).Error
if err != nil {
return
}
err = tx.Exec(`
UPDATE client_traffics
SET all_time = IFNULL(up, 0) + IFNULL(down, 0)
WHERE IFNULL(all_time, 0) = 0 AND (IFNULL(up, 0) + IFNULL(down, 0)) > 0
`).Error
if err != nil {
return
}
// Fix inbounds based problems
var inbounds []*model.Inbound
err = tx.Model(model.Inbound{}).Where("protocol IN (?)", []string{"vmess", "vless", "trojan"}).Find(&inbounds).Error
if err != nil && err != gorm.ErrRecordNotFound {
return
}
for inbound_index := range inbounds {
settings := map[string]any{}
if uerr := json.Unmarshal([]byte(inbounds[inbound_index].Settings), &settings); uerr != nil {
logger.Warningf("MigrationRequirements: cannot parse settings of inbound %d: %v", inbounds[inbound_index].Id, uerr)
continue
}
clients, ok := settings["clients"].([]any)
if ok {
// Fix Client configuration problems
var newClients []any
hasVisionFlow := false
for client_index := range clients {
c := clients[client_index].(map[string]any)
// Add email='' if it does not exist
if _, ok := c["email"]; !ok {
c["email"] = ""
}
// Convert string tgId to int64
if _, ok := c["tgId"]; ok {
var tgId any = c["tgId"]
if tgIdStr, ok2 := tgId.(string); ok2 {
tgIdInt64, err := strconv.ParseInt(strings.ReplaceAll(tgIdStr, " ", ""), 10, 64)
if err == nil {
c["tgId"] = tgIdInt64
}
}
}
// Remove "flow": "xtls-rprx-direct"
if _, ok := c["flow"]; ok {
if c["flow"] == "xtls-rprx-direct" {
c["flow"] = ""
}
}
if flow, _ := c["flow"].(string); flow == "xtls-rprx-vision" {
hasVisionFlow = true
}
// Backfill created_at and updated_at
if _, ok := c["created_at"]; !ok {
c["created_at"] = time.Now().Unix() * 1000
}
c["updated_at"] = time.Now().Unix() * 1000
newClients = append(newClients, any(c))
}
settings["clients"] = newClients
// Drop orphaned testseed: VLESS-only field, only meaningful when at least
// one client uses the exact xtls-rprx-vision flow. Older versions saved it
// for any non-empty flow (including the UDP variant) or kept it after the
// flow was cleared from the client modal — clean those up here.
if inbounds[inbound_index].Protocol == model.VLESS && !hasVisionFlow {
delete(settings, "testseed")
}
modifiedSettings, merr := json.MarshalIndent(settings, "", " ")
if merr != nil {
// Assign to the outer err so the deferred handler rolls back;
// `:=` here would shadow it and silently commit a partial migration.
err = merr
return
}
inbounds[inbound_index].Settings = string(modifiedSettings)
}
// Add a client traffic row for every client that has an email
modelClients, cerr := s.GetClients(inbounds[inbound_index])
if cerr != nil {
// Assign to the outer err so the deferred handler rolls back
// instead of committing via the nil outer err.
err = cerr
return
}
for _, modelClient := range modelClients {
if len(modelClient.Email) > 0 {
var count int64
tx.Model(xray.ClientTraffic{}).Where("email = ?", modelClient.Email).Count(&count)
if count == 0 {
if err = s.AddClientStat(tx, inbounds[inbound_index].Id, &modelClient); err != nil {
return
}
}
}
}
}
if err = tx.Save(inbounds).Error; err != nil {
return
}
// Remove orphaned traffics
tx.Where("inbound_id = 0").Delete(xray.ClientTraffic{})
// Migrate old MultiDomain to External Proxy
var externalProxy []struct {
Id int
Port int
StreamSettings []byte
}
err = tx.Raw(`select id, port, stream_settings
from inbounds
WHERE protocol in ('vmess','vless','trojan')
AND json_extract(stream_settings, '$.security') = 'tls'
AND json_extract(stream_settings, '$.tlsSettings.settings.domains') IS NOT NULL`).Scan(&externalProxy).Error
if err != nil {
return
}
for _, ep := range externalProxy {
var reverses any
stream := map[string]any{}
if uerr := json.Unmarshal(ep.StreamSettings, &stream); uerr != nil {
// Skip unparseable rows: writing into a nil map below would panic.
logger.Warningf("MigrationRequirements: cannot parse stream settings of inbound %d: %v", ep.Id, uerr)
continue
}
if tlsSettings, ok := stream["tlsSettings"].(map[string]any); ok {
if settings, ok := tlsSettings["settings"].(map[string]any); ok {
if domains, ok := settings["domains"].([]any); ok {
for _, domain := range domains {
if domainMap, ok := domain.(map[string]any); ok {
domainMap["forceTls"] = "same"
domainMap["port"] = ep.Port
// Guard the assertion: a malformed entry must not panic the migration
if dest, ok := domainMap["domain"].(string); ok {
domainMap["dest"] = dest
}
delete(domainMap, "domain")
}
}
}
reverses = settings["domains"]
delete(settings, "domains")
}
}
stream["externalProxy"] = reverses
newStream, merr := json.MarshalIndent(stream, "", " ")
if merr != nil {
continue
}
tx.Model(model.Inbound{}).Where("id = ?", ep.Id).Update("stream_settings", newStream)
}
// gorm's Raw() does not execute until a row is read; Exec is required
// for this UPDATE to actually run.
err = tx.Exec(`UPDATE inbounds
SET tag = REPLACE(tag, '0.0.0.0:', '')
WHERE INSTR(tag, '0.0.0.0:') > 0;`).Error
if err != nil {
return
}
}
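The per-domain rewrite in the MultiDomain → External Proxy migration above can be sketched in isolation (the field names `forceTls`, `port`, `dest`, `domain` come from the migration; `migrateDomain` is a hypothetical helper name):

```go
package main

import "fmt"

// migrateDomain rewrites one legacy tlsSettings domain entry into the
// externalProxy shape: domain becomes dest, and forceTls plus the
// inbound port are filled in.
func migrateDomain(entry map[string]any, port int) {
	entry["forceTls"] = "same"
	entry["port"] = port
	if dest, ok := entry["domain"].(string); ok {
		entry["dest"] = dest
	}
	delete(entry, "domain")
}

func main() {
	e := map[string]any{"domain": "example.com", "remark": "main"}
	migrateDomain(e, 443)
	fmt.Println(e["dest"], e["port"], e["forceTls"])
}
```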
func (s *InboundService) MigrateDB() {
s.MigrationRequirements()
s.MigrationRemoveOrphanedTraffics()
}
func (s *InboundService) GetOnlineClients() []string {
return p.GetOnlineClients()
}
// SetNodeOnlineClients records a remote node's online-clients list on
// the panel-wide xray.Process so GetOnlineClients returns the union of
// local + every node's contribution. Called by NodeTrafficSyncJob.
func (s *InboundService) SetNodeOnlineClients(nodeID int, emails []string) {
if p != nil {
p.SetNodeOnlineClients(nodeID, emails)
}
}
// ClearNodeOnlineClients drops one node's contribution to the online
// set. Used when the per-node sync probe fails so a downed node
// doesn't keep its clients listed as online forever.
func (s *InboundService) ClearNodeOnlineClients(nodeID int) {
if p != nil {
p.ClearNodeOnlineClients(nodeID)
}
}
func (s *InboundService) GetClientsLastOnline() (map[string]int64, error) {
db := database.GetDB()
var rows []xray.ClientTraffic
err := db.Model(&xray.ClientTraffic{}).Select("email, last_online").Find(&rows).Error
if err != nil && err != gorm.ErrRecordNotFound {
return nil, err
}
result := make(map[string]int64, len(rows))
for _, r := range rows {
result[r.Email] = r.LastOnline
}
return result, nil
}
func (s *InboundService) FilterAndSortClientEmails(emails []string) ([]string, []string, error) {
db := database.GetDB()
// Step 1: Get ClientTraffic records for emails in the input list.
// Chunked to stay under SQLite's bind-variable limit on huge inputs.
uniqEmails := uniqueNonEmptyStrings(emails)
clients := make([]xray.ClientTraffic, 0, len(uniqEmails))
for _, batch := range chunkStrings(uniqEmails, sqliteMaxVars) {
var page []xray.ClientTraffic
if err := db.Where("email IN ?", batch).Find(&page).Error; err != nil && err != gorm.ErrRecordNotFound {
return nil, nil, err
}
clients = append(clients, page...)
}
// Step 2: Sort clients by (Up + Down) descending
sort.Slice(clients, func(i, j int) bool {
return (clients[i].Up + clients[i].Down) > (clients[j].Up + clients[j].Down)
})
// Step 3: Extract sorted valid emails and track found ones
validEmails := make([]string, 0, len(clients))
found := make(map[string]bool)
for _, client := range clients {
validEmails = append(validEmails, client.Email)
found[client.Email] = true
}
// Step 4: Identify emails that were not found in the database
extraEmails := make([]string, 0)
for _, email := range emails {
if !found[email] {
extraEmails = append(extraEmails, email)
}
}
return validEmails, extraEmails, nil
}
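The filter-and-sort flow above can be sketched with an in-memory stand-in for the ClientTraffic rows (a simplification: `splitByUsage` is a hypothetical helper, and the real method also deduplicates and chunks its DB queries):

```go
package main

import (
	"fmt"
	"sort"
)

// splitByUsage mimics FilterAndSortClientEmails: emails with a known
// traffic total are returned sorted by usage descending; the rest are
// returned separately, preserving input order.
func splitByUsage(emails []string, usage map[string]int64) (valid, extra []string) {
	for _, e := range emails {
		if _, ok := usage[e]; ok {
			valid = append(valid, e)
		} else {
			extra = append(extra, e)
		}
	}
	sort.Slice(valid, func(i, j int) bool { return usage[valid[i]] > usage[valid[j]] })
	return valid, extra
}

func main() {
	valid, extra := splitByUsage(
		[]string{"a", "b", "c", "d"},
		map[string]int64{"a": 10, "c": 99},
	)
	fmt.Println(valid, extra)
}
```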
func (s *InboundService) DelInboundClientByEmail(inboundId int, email string) (bool, error) {
oldInbound, err := s.GetInbound(inboundId)
if err != nil {
logger.Errorf("Error loading inbound %d: %v", inboundId, err)
return false, err
}
var settings map[string]any
if err := json.Unmarshal([]byte(oldInbound.Settings), &settings); err != nil {
return false, err
}
interfaceClients, ok := settings["clients"].([]any)
if !ok {
return false, common.NewError("invalid clients format in inbound settings")
}
var newClients []any
needApiDel := false
found := false
for _, client := range interfaceClients {
c, ok := client.(map[string]any)
if !ok {
continue
}
if cEmail, ok := c["email"].(string); ok && cEmail == email {
// matched client, drop it
found = true
needApiDel, _ = c["enable"].(bool)
} else {
newClients = append(newClients, client)
}
}
if !found {
return false, common.NewError(fmt.Sprintf("client with email %s not found", email))
}
if len(newClients) == 0 {
return false, common.NewError("no client remained in Inbound")
}
settings["clients"] = newClients
newSettings, err := json.MarshalIndent(settings, "", " ")
if err != nil {
return false, err
}
oldInbound.Settings = string(newSettings)
db := database.GetDB()
// Drop the row and IPs only when this was the last inbound referencing
// the email — siblings still need the shared accounting state.
emailShared, err := s.emailUsedByOtherInbounds(email, inboundId)
if err != nil {
return false, err
}
if !emailShared {
if err := s.DelClientIPs(db, email); err != nil {
logger.Error("Error deleting client IPs")
return false, err
}
}
needRestart := false
// remove stats too
if len(email) > 0 && !emailShared {
traffic, err := s.GetClientTrafficByEmail(email)
if err != nil {
return false, err
}
if traffic != nil {
if err := s.DelClientStat(db, email); err != nil {
logger.Error("Error deleting client stats")
return false, err
}
}
if needApiDel {
rt, rterr := s.runtimeFor(oldInbound)
if rterr != nil {
if oldInbound.NodeID != nil {
return false, rterr
}
needRestart = true
} else if oldInbound.NodeID == nil {
if err1 := rt.RemoveUser(context.Background(), oldInbound, email); err1 == nil {
logger.Debug("Client deleted on", rt.Name(), ":", email)
needRestart = false
} else if strings.Contains(err1.Error(), fmt.Sprintf("User %s not found.", email)) {
logger.Debug("User is already deleted. Nothing more to do...")
} else {
logger.Debug("Error in deleting client on", rt.Name(), ":", err1)
needRestart = true
}
} else {
if err1 := rt.UpdateInbound(context.Background(), oldInbound, oldInbound); err1 != nil {
return false, err1
}
}
}
}
return needRestart, db.Save(oldInbound).Error
}