mirror of
https://github.com/MHSanaei/3x-ui.git
synced 2026-02-13 13:57:59 +00:00
feat: implement 'last IP wins' policy for IP limitation (#3735)
Some checks failed
Release 3X-UI / build (386) (push) Has been cancelled
Release 3X-UI / build (amd64) (push) Has been cancelled
Release 3X-UI / build (arm64) (push) Has been cancelled
Release 3X-UI / build (armv5) (push) Has been cancelled
Release 3X-UI / build (armv6) (push) Has been cancelled
Release 3X-UI / build (armv7) (push) Has been cancelled
Release 3X-UI / build (s390x) (push) Has been cancelled
Release 3X-UI / Build for Windows (push) Has been cancelled
- Add timestamp tracking for each client IP address
- Sort IPs by connection time (newest first) instead of alphabetically
- Automatically disconnect old connections when IP limit exceeded
- Keep only the most recent N IPs based on LimitIP setting
- Force disconnection via Xray API (RemoveUser + AddUser)
- Prevents account sharing while allowing legitimate network switching
- Log format: [LIMIT_IP] Email = user@example.com || Disconnecting OLD IP = 1.2.3.4 || Timestamp = 1738521234

This ensures users can seamlessly switch between networks (mobile/WiFi) while the system maintains connections from their most recent IPs only. Fixes account sharing prevention for VPN providers selling per-IP licenses.

Co-authored-by: Aung Ye Zaw <zaw.a.y@phluid.world>
parent f87c68ea68
commit d8fb09faae
2 changed files with 326 additions and 39 deletions
.github/copilot-instructions.md (vendored, new file, +155 lines)
@@ -0,0 +1,155 @@
# 3X-UI Development Guide

## Project Overview

3X-UI is a web-based control panel for managing Xray-core servers. It's a Go application using Gin web framework with embedded static assets and SQLite database. The panel manages VPN/proxy inbounds, monitors traffic, and provides Telegram bot integration.

## Architecture

### Core Components

- **main.go**: Entry point that initializes database, web server, and subscription server. Handles graceful shutdown via SIGHUP/SIGTERM signals
- **web/**: Primary web server with Gin router, HTML templates, and static assets embedded via `//go:embed`
- **xray/**: Xray-core process management and API communication for traffic monitoring
- **database/**: GORM-based SQLite database with models in `database/model/`
- **sub/**: Subscription server running alongside main web server (separate port)
- **web/service/**: Business logic layer containing InboundService, SettingService, TgBot, etc.
- **web/controller/**: HTTP handlers using Gin context (`*gin.Context`)
- **web/job/**: Cron-based background jobs for traffic monitoring, CPU checks, LDAP sync

### Key Architectural Patterns

1. **Embedded Resources**: All web assets (HTML, CSS, JS, translations) are embedded at compile time using `embed.FS`:
   - `web/assets` → `assetsFS`
   - `web/html` → `htmlFS`
   - `web/translation` → `i18nFS`

2. **Dual Server Design**: Main web panel + subscription server run concurrently, managed by `web/global` package

3. **Xray Integration**: Panel generates `config.json` for Xray binary, communicates via gRPC API for real-time traffic stats

4. **Signal-Based Restart**: SIGHUP triggers graceful restart. **Critical**: Always call `service.StopBot()` before restart to prevent Telegram bot 409 conflicts

5. **Database Seeders**: Uses `HistoryOfSeeders` model to track one-time migrations (e.g., password bcrypt migration)
## Development Workflows

### Building & Running

```bash
# Build (creates bin/3x-ui.exe); or run the VS Code "go: build" task
go build -o bin/3x-ui .

# Run with debug logging
XUI_DEBUG=true go run ./main.go
# Or use task: "go: run"

# Test
go test ./...
```

### Command-Line Operations

The main.go accepts flags for admin tasks:

- `-reset` - Reset all panel settings to defaults
- `-show` - Display current settings (port, paths)
- Use these by running the binary directly, not via web interface

### Database Management

- DB path: Configured via `config.GetDBPath()`, typically `/etc/x-ui/x-ui.db`
- Models: Located in `database/model/model.go` - Auto-migrated on startup
- Seeders: Use `HistoryOfSeeders` to prevent re-running migrations
- Default credentials: admin/admin (hashed with bcrypt)
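The seeder mechanism reduces to "run once, record the name". A dependency-free sketch with an in-memory map standing in for the `HistoryOfSeeders` table (the real code persists this via GORM; `runSeeder` is a hypothetical helper, not the panel's API):

```go
package main

import "fmt"

// seedHistory stands in for the HistoryOfSeeders table.
type seedHistory map[string]bool

// runSeeder executes fn once per seederName, mirroring how the panel
// skips one-time migrations (e.g. the password bcrypt migration) that
// have already been recorded.
func runSeeder(h seedHistory, seederName string, fn func() error) (ran bool, err error) {
	if h[seederName] {
		return false, nil // already applied
	}
	if err := fn(); err != nil {
		return false, err // do not record a failed seeder
	}
	h[seederName] = true
	return true, nil
}

func main() {
	h := seedHistory{}
	migrate := func() error {
		fmt.Println("hashing legacy passwords with bcrypt ...")
		return nil
	}
	first, _ := runSeeder(h, "UserPasswordHash", migrate)
	second, _ := runSeeder(h, "UserPasswordHash", migrate)
	fmt.Println(first, second) // runs once, then is skipped
}
```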
### Telegram Bot Development

- Bot instance in `web/service/tgbot.go` (3700+ lines)
- Uses `telego` library with long polling
- **Critical Pattern**: Must call `service.StopBot()` before any server restart to prevent 409 bot conflicts
- Bot handlers use `telegohandler.BotHandler` for routing
- i18n via embedded `i18nFS` passed to bot startup

## Code Conventions

### Service Layer Pattern

Services inject dependencies (like xray.XrayAPI) and operate on GORM models:

```go
type InboundService struct {
	xrayApi xray.XrayAPI
}

func (s *InboundService) GetInbounds(userId int) ([]*model.Inbound, error) {
	// Business logic here
}
```

### Controller Pattern

Controllers use Gin context and inherit from BaseController:

```go
func (a *InboundController) getInbounds(c *gin.Context) {
	// Use I18nWeb(c, "key") for translations
	// Check auth via checkLogin middleware
}
```

### Configuration Management

- Environment vars: `XUI_DEBUG`, `XUI_LOG_LEVEL`, `XUI_MAIN_FOLDER`
- Config embedded files: `config/version`, `config/name`
- Use `config.GetLogLevel()`, `config.GetDBPath()` helpers
### Internationalization

- Translation files: `web/translation/translate.*.toml`
- Access via `I18nWeb(c, "pages.login.loginAgain")` in controllers
- Use `locale.I18nType` enum (Web, Api, etc.)

## External Dependencies & Integration

### Xray-core

- Binary management: Download platform-specific binary (`xray-{os}-{arch}`) to bin folder
- Config generation: Panel creates `config.json` dynamically from inbound/outbound settings
- Process control: Start/stop via `xray/process.go`
- gRPC API: Real-time stats via `xray/api.go` using `google.golang.org/grpc`

### Critical External Paths

- Xray binary: `{bin_folder}/xray-{os}-{arch}`
- Xray config: `{bin_folder}/config.json`
- GeoIP/GeoSite: `{bin_folder}/geoip.dat`, `geosite.dat`
- Logs: `{log_folder}/3xipl.log`, `3xipl-banned.log`

### Job Scheduling

Uses `robfig/cron/v3` for periodic tasks:

- Traffic monitoring: `xray_traffic_job.go`
- CPU alerts: `check_cpu_usage.go`
- IP tracking: `check_client_ip_job.go`
- LDAP sync: `ldap_sync_job.go`

Jobs registered in `web/web.go` during server initialization
## Deployment & Scripts

### Installation Script Pattern

Both `install.sh` and `x-ui.sh` follow these patterns:

- Multi-distro support via `$release` variable (ubuntu, debian, centos, arch, etc.)
- Port detection with `is_port_in_use()` using ss/netstat/lsof
- Systemd service management with distro-specific unit files (`.service.debian`, `.service.arch`, `.service.rhel`)

### Docker Build

Multi-stage Dockerfile:

1. **Builder**: CGO-enabled build, runs `DockerInit.sh` to download Xray binary
2. **Final**: Alpine-based with fail2ban pre-configured

### Key File Locations (Production)

- Binary: `/usr/local/x-ui/`
- Database: `/etc/x-ui/x-ui.db`
- Logs: `/var/log/x-ui/`
- Service: `/etc/systemd/system/x-ui.service.*`

## Testing & Debugging

- Set `XUI_DEBUG=true` for detailed logging
- Check Xray process: `x-ui.sh` script provides menu for status/logs
- Database inspection: Direct SQLite access to x-ui.db
- Traffic debugging: Check `3xipl.log` for IP limit tracking
- Telegram bot: Logs show bot initialization and command handling

## Common Gotchas

1. **Bot Restart**: Always stop Telegram bot before server restart to avoid 409 conflict
2. **Embedded Assets**: Changes to HTML/CSS require recompilation (not hot-reload)
3. **Password Migration**: Seeder system tracks bcrypt migration - check `HistoryOfSeeders` table
4. **Port Binding**: Subscription server uses different port from main panel
5. **Xray Binary**: Must match OS/arch exactly - managed by installer scripts
6. **Session Management**: Uses `gin-contrib/sessions` with cookie store
7. **IP Limitation**: Implements "last IP wins" - when client exceeds LimitIP, oldest connections are automatically disconnected via Xray API to allow newest IPs
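The "last IP wins" selection in gotcha 7 boils down to sorting entries by last-seen time and truncating to the limit. A minimal sketch under assumed types (`ipSeen` and `keepNewest` are illustrative, not the job's real API):

```go
package main

import (
	"fmt"
	"sort"
)

// ipSeen pairs an IP with its last-seen unix timestamp, in the spirit
// of the IPWithTimestamp struct in check_client_ip_job.go.
type ipSeen struct {
	IP   string
	Seen int64
}

// keepNewest implements "last IP wins": sort newest-first and keep at
// most limit entries; the remainder are the connections to disconnect.
func keepNewest(ips []ipSeen, limit int) (kept, dropped []ipSeen) {
	sort.Slice(ips, func(i, j int) bool { return ips[i].Seen > ips[j].Seen })
	if len(ips) <= limit {
		return ips, nil
	}
	return ips[:limit], ips[limit:]
}

func main() {
	ips := []ipSeen{
		{"1.2.3.4", 1738521000}, // oldest: disconnected at limit 2
		{"5.6.7.8", 1738521200},
		{"9.9.9.9", 1738521100},
	}
	kept, dropped := keepNewest(ips, 2)
	fmt.Println("kept:", kept[0].IP, kept[1].IP, "dropped:", dropped[0].IP)
}
```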
web/job/check_client_ip_job.go

@@ -3,6 +3,7 @@ package job
 import (
 	"bufio"
 	"encoding/json"
+	"fmt"
 	"io"
 	"log"
 	"os"
@@ -10,6 +11,7 @@ import (
 	"regexp"
 	"runtime"
 	"sort"
+	"strconv"
 	"time"

 	"github.com/mhsanaei/3x-ui/v2/database"
@@ -18,6 +20,12 @@ import (
 	"github.com/mhsanaei/3x-ui/v2/xray"
 )

+// IPWithTimestamp tracks an IP address with its last seen timestamp
+type IPWithTimestamp struct {
+	IP        string `json:"ip"`
+	Timestamp int64  `json:"timestamp"`
+}
+
 // CheckClientIpJob monitors client IP addresses from access logs and manages IP blocking based on configured limits.
 type CheckClientIpJob struct {
 	lastClear int64
@@ -119,12 +127,14 @@ func (j *CheckClientIpJob) processLogFile() bool {
 	ipRegex := regexp.MustCompile(`from (?:tcp:|udp:)?\[?([0-9a-fA-F\.:]+)\]?:\d+ accepted`)
 	emailRegex := regexp.MustCompile(`email: (.+)$`)
+	timestampRegex := regexp.MustCompile(`^(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})`)

 	accessLogPath, _ := xray.GetAccessLogPath()
 	file, _ := os.Open(accessLogPath)
 	defer file.Close()

-	inboundClientIps := make(map[string]map[string]struct{}, 100)
+	// Track IPs with their last seen timestamp
+	inboundClientIps := make(map[string]map[string]int64, 100)

 	scanner := bufio.NewScanner(file)
 	for scanner.Scan() {
@@ -147,28 +157,45 @@ func (j *CheckClientIpJob) processLogFile() bool {
 		}
 		email := emailMatches[1]

-		if _, exists := inboundClientIps[email]; !exists {
-			inboundClientIps[email] = make(map[string]struct{})
+		// Extract timestamp from log line
+		var timestamp int64
+		timestampMatches := timestampRegex.FindStringSubmatch(line)
+		if len(timestampMatches) >= 2 {
+			t, err := time.Parse("2006/01/02 15:04:05", timestampMatches[1])
+			if err == nil {
+				timestamp = t.Unix()
+			} else {
+				timestamp = time.Now().Unix()
+			}
+		} else {
+			timestamp = time.Now().Unix()
+		}
+
+		if _, exists := inboundClientIps[email]; !exists {
+			inboundClientIps[email] = make(map[string]int64)
+		}
+		// Update timestamp - keep the latest
+		if existingTime, ok := inboundClientIps[email][ip]; !ok || timestamp > existingTime {
+			inboundClientIps[email][ip] = timestamp
 		}
-		inboundClientIps[email][ip] = struct{}{}
 	}

 	shouldCleanLog := false
-	for email, uniqueIps := range inboundClientIps {
+	for email, ipTimestamps := range inboundClientIps {

-		ips := make([]string, 0, len(uniqueIps))
-		for ip := range uniqueIps {
-			ips = append(ips, ip)
+		// Convert to IPWithTimestamp slice
+		ipsWithTime := make([]IPWithTimestamp, 0, len(ipTimestamps))
+		for ip, timestamp := range ipTimestamps {
+			ipsWithTime = append(ipsWithTime, IPWithTimestamp{IP: ip, Timestamp: timestamp})
 		}
-		sort.Strings(ips)

 		clientIpsRecord, err := j.getInboundClientIps(email)
 		if err != nil {
-			j.addInboundClientIps(email, ips)
+			j.addInboundClientIps(email, ipsWithTime)
 			continue
 		}

-		shouldCleanLog = j.updateInboundClientIps(clientIpsRecord, email, ips) || shouldCleanLog
+		shouldCleanLog = j.updateInboundClientIps(clientIpsRecord, email, ipsWithTime) || shouldCleanLog
 	}

 	return shouldCleanLog
@@ -213,9 +240,9 @@ func (j *CheckClientIpJob) getInboundClientIps(clientEmail string) (*model.Inbou
 	return InboundClientIps, nil
 }

-func (j *CheckClientIpJob) addInboundClientIps(clientEmail string, ips []string) error {
+func (j *CheckClientIpJob) addInboundClientIps(clientEmail string, ipsWithTime []IPWithTimestamp) error {
 	inboundClientIps := &model.InboundClientIps{}
-	jsonIps, err := json.Marshal(ips)
+	jsonIps, err := json.Marshal(ipsWithTime)
 	j.checkError(err)

 	inboundClientIps.ClientEmail = clientEmail
@@ -239,16 +266,8 @@ func (j *CheckClientIpJob) addInboundClientIps(clientEmail string, ips []string)
 	return nil
 }

-func (j *CheckClientIpJob) updateInboundClientIps(inboundClientIps *model.InboundClientIps, clientEmail string, ips []string) bool {
-	jsonIps, err := json.Marshal(ips)
-	if err != nil {
-		logger.Error("failed to marshal IPs to JSON:", err)
-		return false
-	}
-
-	inboundClientIps.ClientEmail = clientEmail
-	inboundClientIps.Ips = string(jsonIps)
-
+func (j *CheckClientIpJob) updateInboundClientIps(inboundClientIps *model.InboundClientIps, clientEmail string, newIpsWithTime []IPWithTimestamp) bool {
+	// Get the inbound configuration
 	inbound, err := j.getInboundByEmail(clientEmail)
 	if err != nil {
 		logger.Errorf("failed to fetch inbound settings for email %s: %s", clientEmail, err)
@@ -263,9 +282,57 @@ func (j *CheckClientIpJob) updateInboundClientIps(inboundClientIps *model.Inboun
 	settings := map[string][]model.Client{}
 	json.Unmarshal([]byte(inbound.Settings), &settings)
 	clients := settings["clients"]

+	// Find the client's IP limit
+	var limitIp int
+	var clientFound bool
+	for _, client := range clients {
+		if client.Email == clientEmail {
+			limitIp = client.LimitIP
+			clientFound = true
+			break
+		}
+	}
+
+	if !clientFound || limitIp <= 0 || !inbound.Enable {
+		// No limit or inbound disabled, just update and return
+		jsonIps, _ := json.Marshal(newIpsWithTime)
+		inboundClientIps.Ips = string(jsonIps)
+		db := database.GetDB()
+		db.Save(inboundClientIps)
+		return false
+	}
+
+	// Parse old IPs from database
+	var oldIpsWithTime []IPWithTimestamp
+	if inboundClientIps.Ips != "" {
+		json.Unmarshal([]byte(inboundClientIps.Ips), &oldIpsWithTime)
+	}
+
+	// Merge old and new IPs, keeping the latest timestamp for each IP
+	ipMap := make(map[string]int64)
+	for _, ipTime := range oldIpsWithTime {
+		ipMap[ipTime.IP] = ipTime.Timestamp
+	}
+	for _, ipTime := range newIpsWithTime {
+		if existingTime, ok := ipMap[ipTime.IP]; !ok || ipTime.Timestamp > existingTime {
+			ipMap[ipTime.IP] = ipTime.Timestamp
+		}
+	}
+
+	// Convert back to slice and sort by timestamp (newest first)
+	allIps := make([]IPWithTimestamp, 0, len(ipMap))
+	for ip, timestamp := range ipMap {
+		allIps = append(allIps, IPWithTimestamp{IP: ip, Timestamp: timestamp})
+	}
+	sort.Slice(allIps, func(i, j int) bool {
+		return allIps[i].Timestamp > allIps[j].Timestamp // Descending order (newest first)
+	})
+
 	shouldCleanLog := false
 	j.disAllowedIps = []string{}

+	// Open log file
 	logIpFile, err := os.OpenFile(xray.GetIPLimitLogPath(), os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
 	if err != nil {
 		logger.Errorf("failed to open IP limit log file: %s", err)
@@ -275,27 +342,33 @@ func (j *CheckClientIpJob) updateInboundClientIps(inboundClientIps *model.Inboun
 	log.SetOutput(logIpFile)
 	log.SetFlags(log.LstdFlags)

-	for _, client := range clients {
-		if client.Email == clientEmail {
-			limitIp := client.LimitIP
-
-			if limitIp > 0 && inbound.Enable {
-				shouldCleanLog = true
-
-				if limitIp < len(ips) {
-					j.disAllowedIps = append(j.disAllowedIps, ips[limitIp:]...)
-					for i := limitIp; i < len(ips); i++ {
-						log.Printf("[LIMIT_IP] Email = %s || SRC = %s", clientEmail, ips[i])
-					}
-				}
-			}
+	// Check if we exceed the limit
+	if len(allIps) > limitIp {
+		shouldCleanLog = true
+
+		// Keep only the newest IPs (up to limitIp)
+		keptIps := allIps[:limitIp]
+		disconnectedIps := allIps[limitIp:]
+
+		// Log the disconnected IPs (old ones)
+		for _, ipTime := range disconnectedIps {
+			j.disAllowedIps = append(j.disAllowedIps, ipTime.IP)
+			log.Printf("[LIMIT_IP] Email = %s || Disconnecting OLD IP = %s || Timestamp = %d", clientEmail, ipTime.IP, ipTime.Timestamp)
 		}
-	}
-
-	sort.Strings(j.disAllowedIps)

-	if len(j.disAllowedIps) > 0 {
-		logger.Debug("disAllowedIps:", j.disAllowedIps)
+		// Actually disconnect old IPs by temporarily removing and re-adding user
+		// This forces Xray to drop existing connections from old IPs
+		if len(disconnectedIps) > 0 {
+			j.disconnectClientTemporarily(inbound, clientEmail, clients)
+		}
+
+		// Update database with only the newest IPs
+		jsonIps, _ := json.Marshal(keptIps)
+		inboundClientIps.Ips = string(jsonIps)
+	} else {
+		// Under limit, save all IPs
+		jsonIps, _ := json.Marshal(allIps)
+		inboundClientIps.Ips = string(jsonIps)
 	}

 	db := database.GetDB()
@@ -305,9 +378,68 @@ func (j *CheckClientIpJob) updateInboundClientIps(inboundClientIps *model.Inboun
 		return false
 	}

+	if len(j.disAllowedIps) > 0 {
+		logger.Infof("[LIMIT_IP] Client %s: Kept %d newest IPs, disconnected %d old IPs", clientEmail, limitIp, len(j.disAllowedIps))
+	}
+
 	return shouldCleanLog
 }

+// disconnectClientTemporarily removes and re-adds a client to force disconnect old connections
+func (j *CheckClientIpJob) disconnectClientTemporarily(inbound *model.Inbound, clientEmail string, clients []model.Client) {
+	var xrayAPI xray.XrayAPI
+
+	// Get panel settings for API port
+	db := database.GetDB()
+	var apiPort int
+	var apiPortSetting model.Setting
+	if err := db.Where("key = ?", "xrayApiPort").First(&apiPortSetting).Error; err == nil {
+		apiPort, _ = strconv.Atoi(apiPortSetting.Value)
+	}
+
+	if apiPort == 0 {
+		apiPort = 10085 // Default API port
+	}
+
+	err := xrayAPI.Init(apiPort)
+	if err != nil {
+		logger.Warningf("[LIMIT_IP] Failed to init Xray API for disconnection: %v", err)
+		return
+	}
+	defer xrayAPI.Close()
+
+	// Find the client config
+	var clientConfig map[string]any
+	for _, client := range clients {
+		if client.Email == clientEmail {
+			// Convert client to map for API
+			clientBytes, _ := json.Marshal(client)
+			json.Unmarshal(clientBytes, &clientConfig)
+			break
+		}
+	}
+
+	if clientConfig == nil {
+		return
+	}
+
+	// Remove user to disconnect all connections
+	err = xrayAPI.RemoveUser(inbound.Tag, clientEmail)
+	if err != nil {
+		logger.Warningf("[LIMIT_IP] Failed to remove user %s: %v", clientEmail, err)
+		return
+	}
+
+	// Wait a moment for disconnection to take effect
+	time.Sleep(100 * time.Millisecond)
+
+	// Re-add user to allow new connections
+	err = xrayAPI.AddUser(string(inbound.Protocol), inbound.Tag, clientConfig)
+	if err != nil {
+		logger.Warningf("[LIMIT_IP] Failed to re-add user %s: %v", clientEmail, err)
+	}
+}
+
 func (j *CheckClientIpJob) getInboundByEmail(clientEmail string) (*model.Inbound, error) {
 	db := database.GetDB()
 	inbound := &model.Inbound{}