Mirror of https://github.com/juanfont/headscale.git
Synced 2024-11-30 02:43:05 +00:00

Compare commits: 1 commit, c6f41c9054 ... 11861028c4 (commit 11861028c4)

35 changed files with 392 additions and 1609 deletions
.github/workflows/test-integration.yaml (vendored), 1 change

@@ -21,7 +21,6 @@ jobs:
   - TestPolicyUpdateWhileRunningWithCLIInDatabase
   - TestOIDCAuthenticationPingAll
   - TestOIDCExpireNodesBasedOnTokenExpiry
-  - TestOIDC024UserCreation
   - TestAuthWebFlowAuthenticationPingAll
   - TestAuthWebFlowLogoutAndRelogin
   - TestUserCommand
CHANGELOG.md, 78 changes

@@ -2,82 +2,16 @@

This hunk drops the long "Security fix: OIDC changes in Headscale 0.24.0" write-up from the changelog and adds the "Redo OpenID Connect configuration" entry (with its sub-items) under BREAKING. The removed write-up and the surrounding context follow.

## Next
### Security fix: OIDC changes in Headscale 0.24.0

_Headscale v0.23.0 and earlier_ identified OIDC users by the "username" part of their email address (when `strip_email_domain: true`, the default) or the whole email address (when `strip_email_domain: false`).

Depending on how Headscale and your Identity Provider (IdP) were configured, using only the `email` claim could allow a malicious user with an IdP account to take over another Headscale user's account, even when `strip_email_domain: false`.

This would also cause a user to lose access to their Headscale account if they changed their email address.

_Headscale v0.24.0_ now identifies OIDC users by the `iss` and `sub` claims. [These are guaranteed by the OIDC specification to be stable and unique](https://openid.net/specs/openid-connect-core-1_0.html#ClaimStability), even if a user changes email address. A well-designed IdP will typically set `sub` to an opaque identifier like a UUID or numeric ID, which has no relation to the user's name or email address.
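A minimal sketch of what keying users on these two claims can look like; the `Claims` struct and `identityKey` helper are illustrative, not Headscale's actual types.

```go
package main

import "fmt"

// Claims holds the two OIDC claims that the text above says v0.24.0 keys users on.
type Claims struct {
	Iss string // issuer URL of the IdP
	Sub string // opaque, stable subject identifier
}

// identityKey joins issuer and subject into a single lookup key; unlike an
// email address, the OIDC spec guarantees this pair stays stable for a user.
func identityKey(c Claims) string {
	return c.Iss + "/" + c.Sub
}

func main() {
	c := Claims{Iss: "https://idp.example.com", Sub: "a1b2c3d4-0000-0000-0000-000000000000"}
	fmt.Println(identityKey(c))
}
```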
This issue _only_ affects Headscale installations which authenticate with OIDC.

Headscale v0.24.0 and later will also automatically update profile fields with OIDC data on login. This means that users can change those details in your IdP and have them populate to Headscale automatically the next time they log in. However, this may affect the way you reference users in policies.
#### Migrating existing installations

Headscale v0.23.0 and earlier never recorded the `iss` and `sub` fields, so all legacy (existing) OIDC accounts _need to be migrated_ to be properly secured.

Headscale v0.24.0 has an automatic migration feature, which is enabled by default (`map_legacy_users: true`). **This will be disabled by default in a future version of Headscale – any unmigrated users will get new accounts.**

Headscale v0.24.0 will ignore any `email` claim if the IdP does not provide an `email_verified` claim set to `true`. [What "verified" actually means is contextually dependent](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) – Headscale uses it as a signal that the contents of the `email` claim are reasonably trustworthy.

Headscale v0.23.0 and earlier never checked the `email_verified` claim. This means that even if an IdP explicitly indicated to Headscale that its `email` claim was untrustworthy, Headscale would still have accepted it.
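A small illustrative sketch of that rule, assuming a plain struct for the incoming claims rather than Headscale's real OIDC types:

```go
package main

import "fmt"

type emailClaims struct {
	Email         string
	EmailVerified bool
}

// trustedEmail returns the email claim only when the IdP also sent
// email_verified=true; otherwise the claim is ignored entirely.
func trustedEmail(c emailClaims) (string, bool) {
	if !c.EmailVerified {
		return "", false
	}
	return c.Email, true
}

func main() {
	fmt.Println(trustedEmail(emailClaims{Email: "sam@example.com"}))
	fmt.Println(trustedEmail(emailClaims{Email: "sam@example.com", EmailVerified: true}))
}
```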
##### What does automatic migration do?

When automatic migration is enabled (`map_legacy_users: true`), Headscale will first match an OIDC account to a Headscale account by `iss` and `sub`, and then fall back to matching OIDC users similarly to how Headscale v0.23.0 did:

- If `strip_email_domain: true` (the default): the Headscale username matches the "username" part of their email address.
- If `strip_email_domain: false`: the Headscale username matches the _whole_ email address.
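The matching order described above can be sketched roughly as follows; the `account` type, the `matchLogin` helper and the way the legacy username is derived are assumptions for illustration, not the actual implementation:

```go
package main

import "fmt"

// account is a stand-in for a stored Headscale user.
type account struct {
	Name               string
	ProviderIdentifier string // empty for legacy accounts that predate iss/sub tracking
}

// matchLogin applies the order described above: prefer the stable iss+sub
// identifier, then fall back to a legacy username match; nil means a brand
// new account would be created.
func matchLogin(accounts []account, providerID, legacyUsername string) *account {
	for i := range accounts {
		if accounts[i].ProviderIdentifier == providerID {
			return &accounts[i]
		}
	}
	for i := range accounts {
		if accounts[i].ProviderIdentifier == "" && accounts[i].Name == legacyUsername {
			// A real migration would now record providerID on this account
			// and rename it to the IdP's preferred_username.
			return &accounts[i]
		}
	}
	return nil
}

func main() {
	accounts := []account{
		{Name: "ssmith"}, // legacy, never migrated
		{Name: "alice", ProviderIdentifier: "https://idp.example.com/42"},
	}
	m := matchLogin(accounts, "https://idp.example.com/7", "ssmith")
	fmt.Println(m.Name) // ssmith, matched via the legacy fallback
}
```

In this sketch the caller decides how the legacy username is derived, which is where the `strip_email_domain` distinction above comes in.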
On migration, Headscale will change the account's username to their `preferred_username`. **This could break any ACLs or policies which are configured to match by username.**

Like with Headscale v0.23.0 and earlier, this migration only works for users who haven't changed their email address since their last Headscale login.

A _successful_ automated migration should otherwise be transparent to users.

Once a Headscale account has been migrated, it will be _unavailable_ to be matched by the legacy process. An OIDC login with a matching username but _non-matching_ `iss` and `sub` will instead get a _new_ Headscale account.

Because of the way OIDC works, Headscale's automated migration process can _only_ work when a user tries to log in after the update. Mass updates would require Headscale to implement a protocol like SCIM, which is **extremely** complicated and not available in all identity providers.

Administrators could also attempt to migrate users manually by editing the database, using their own mapping rules with known-good data sources.

Legacy account migration should have no effect on new installations where all users have a recorded `sub` and `iss`.
##### What happens when automatic migration is disabled?

When automatic migration is disabled (`map_legacy_users: false`), Headscale will only try to match an OIDC account to a Headscale account by `iss` and `sub`.

If there is no match, it will get a _new_ Headscale account – even if there was a legacy account which _could_ have matched and migrated.

We recommend new Headscale users explicitly disable automatic migration – but it should otherwise have no effect if every account has a recorded `iss` and `sub`.

When automatic migration is disabled, the `strip_email_domain` setting will have no effect.

Special thanks to @micolous for reviewing, proposing and working with us on these changes.
#### Other OIDC changes

Headscale now uses [the standard OIDC claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) to populate and update user information every time they log in:

| Headscale profile field | OIDC claim           | Notes / examples                                                                                          |
| ----------------------- | -------------------- | --------------------------------------------------------------------------------------------------------- |
| email address           | `email`              | Only used when `"email_verified": true`                                                                   |
| display name            | `name`               | eg: `Sam Smith`                                                                                           |
| username                | `preferred_username` | Varies depending on IdP and configuration, eg: `ssmith`, `ssmith@idp.example.com`, `\\example.com\ssmith` |
| profile picture         | `picture`            | URL to a profile picture or avatar                                                                        |

These should show up nicely in the Tailscale client.

This will also affect the way you [reference users in policies](https://github.com/juanfont/headscale/pull/2205).
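A hedged sketch of the mapping in this table, using illustrative local types rather than Headscale's internal user model:

```go
package main

import "fmt"

// standardClaims mirrors the subset of OIDC standard claims in the table above.
type standardClaims struct {
	Email             string
	EmailVerified     bool
	Name              string // display name, e.g. "Sam Smith"
	PreferredUsername string // e.g. "ssmith" or "ssmith@idp.example.com"
	Picture           string // URL to an avatar
}

// profile is an illustrative stand-in for the user fields being refreshed.
type profile struct {
	Email       string
	DisplayName string
	Username    string
	PictureURL  string
}

// profileFromClaims applies the table row by row; the email column is only
// filled in when the IdP marks the address as verified.
func profileFromClaims(c standardClaims) profile {
	p := profile{
		DisplayName: c.Name,
		Username:    c.PreferredUsername,
		PictureURL:  c.Picture,
	}
	if c.EmailVerified {
		p.Email = c.Email
	}
	return p
}

func main() {
	fmt.Printf("%+v\n", profileFromClaims(standardClaims{
		Email:             "sam@example.com",
		EmailVerified:     true,
		Name:              "Sam Smith",
		PreferredUsername: "ssmith",
		Picture:           "https://idp.example.com/avatar/ssmith.png",
	}))
}
```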
### BREAKING

- Remove `dns.use_username_in_magic_dns` configuration option [#2020](https://github.com/juanfont/headscale/pull/2020)
  - Having usernames in magic DNS is no longer possible.
- Redo OpenID Connect configuration [#2020](https://github.com/juanfont/headscale/pull/2020)
  - `strip_email_domain` has been removed; the domain is _always_ part of the username for OIDC.
  - Users are now identified by the `sub` claim in the ID token instead of username, allowing the username, name and email to be updated.
  - User has been extended to store username, display name, profile picture URL and email.
  - These fields are forwarded to the client and show up nicely in the user switcher.
  - These fields can be made available via the API/CLI for non-OIDC users in the future.
- Remove versions older than 1.56 [#2149](https://github.com/juanfont/headscale/pull/2149)
  - Clean up old code required by old versions
@@ -1,10 +1,8 @@
 package cli
 
 import (
-"encoding/json"
 "fmt"
 "net"
-"net/http"
 "os"
 "strconv"
 "time"

@@ -66,19 +64,6 @@ func mockOIDC() error {
 accessTTL = newTTL
 }
 
-userStr := os.Getenv("MOCKOIDC_USERS")
-if userStr == "" {
-return fmt.Errorf("MOCKOIDC_USERS not defined")
-}
-
-var users []mockoidc.MockUser
-err := json.Unmarshal([]byte(userStr), &users)
-if err != nil {
-return fmt.Errorf("unmarshalling users: %w", err)
-}
-
-log.Info().Interface("users", users).Msg("loading users from JSON")
-
 log.Info().Msgf("Access token TTL: %s", accessTTL)
 
 port, err := strconv.Atoi(portStr)

@@ -86,7 +71,7 @@ func mockOIDC() error {
 return err
 }
 
-mock, err := getMockOIDC(clientID, clientSecret, users)
+mock, err := getMockOIDC(clientID, clientSecret)
 if err != nil {
 return err
 }

@@ -108,18 +93,12 @@ func mockOIDC() error {
 return nil
 }
 
-func getMockOIDC(clientID string, clientSecret string, users []mockoidc.MockUser) (*mockoidc.MockOIDC, error) {
+func getMockOIDC(clientID string, clientSecret string) (*mockoidc.MockOIDC, error) {
 keypair, err := mockoidc.NewKeypair(nil)
 if err != nil {
 return nil, err
 }
 
-userQueue := mockoidc.UserQueue{}
-
-for _, user := range users {
-userQueue.Push(&user)
-}
-
 mock := mockoidc.MockOIDC{
 ClientID: clientID,
 ClientSecret: clientSecret,

@@ -128,19 +107,9 @@ func getMockOIDC(clientID string, clientSecret string, users []mockoidc.MockUser
 CodeChallengeMethodsSupported: []string{"plain", "S256"},
 Keypair: keypair,
 SessionStore: mockoidc.NewSessionStore(),
-UserQueue: &userQueue,
+UserQueue: &mockoidc.UserQueue{},
 ErrorQueue: &mockoidc.ErrorQueue{},
 }
 
-mock.AddMiddleware(func(h http.Handler) http.Handler {
-return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-log.Info().Msgf("Request: %+v", r)
-h.ServeHTTP(w, r)
-if r.Response != nil {
-log.Info().Msgf("Response: %+v", r.Response)
-}
-})
-})
-
 return &mock, nil
 }
@@ -168,11 +168,6 @@ database:
 # https://www.sqlite.org/wal.html
 write_ahead_log: true
 
-# Maximum number of WAL file frames before the WAL file is automatically checkpointed.
-# https://www.sqlite.org/c3ref/wal_autocheckpoint.html
-# Set to 0 to disable automatic checkpointing.
-wal_autocheckpoint: 1000
-
 # # Postgres config
 # Please note that using Postgres is highly discouraged as it is only supported for legacy reasons.
 # See database.type for more information.

@@ -369,18 +364,12 @@ unix_socket_permission: "0770"
 # allowed_users:
 #  - alice@example.com
 #
-# # Map legacy users from pre-0.24.0 versions of headscale to the new OIDC users
-# # by taking the username from the legacy user and matching it with the username
-# # provided by the OIDC. This is useful when migrating from legacy users to OIDC
-# # to force them using the unique identifier from the OIDC and to give them a
-# # proper display name and picture if available.
-# # Note that this will only work if the username from the legacy user is the same
-# # and ther is a posibility for account takeover should a username have changed
-# # with the provider.
-# # Disabling this feature will cause all new logins to be created as new users.
-# # Note this option will be removed in the future and should be set to false
-# # on all new installations, or when all users have logged in with OIDC once.
-# map_legacy_users: true
+# # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
+# # This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
+# # If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
+# user: `first-name.last-name.example.com`
+#
+# strip_email_domain: true
 
 # Logtail configuration
 # Logtail is Tailscales logging and auditing infrastructure, it allows the control panel
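The restored `strip_email_domain` comments describe a simple transformation; a minimal sketch of it, with an assumed helper name, could look like this:

```go
package main

import (
	"fmt"
	"strings"
)

// usernameFromEmail mirrors the behaviour the restored comments describe:
// with strip=true, "first-name.last-name@example.com" becomes
// "first-name.last-name"; with strip=false the domain is kept, giving
// "first-name.last-name.example.com".
func usernameFromEmail(email string, strip bool) string {
	local, domain, found := strings.Cut(email, "@")
	if !found {
		return email
	}
	if strip {
		return local
	}
	return local + "." + domain
}

func main() {
	fmt.Println(usernameFromEmail("first-name.last-name@example.com", true))
	fmt.Println(usernameFromEmail("first-name.last-name@example.com", false))
}
```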
@@ -20,11 +20,11 @@
 },
 "nixpkgs": {
 "locked": {
-"lastModified": 1731890469,
-"narHash": "sha256-D1FNZ70NmQEwNxpSSdTXCSklBH1z2isPR84J6DQrJGs=",
+"lastModified": 1731763621,
+"narHash": "sha256-ddcX4lQL0X05AYkrkV2LMFgGdRvgap7Ho8kgon3iWZk=",
 "owner": "NixOS",
 "repo": "nixpkgs",
-"rev": "5083ec887760adfe12af64830a66807423a859a7",
+"rev": "c69a9bffbecde46b4b939465422ddc59493d3e4d",
 "type": "github"
 },
 "original": {
@@ -32,7 +32,7 @@
 
 # When updating go.mod or go.sum, a new sha will need to be calculated,
 # update this if you have a mismatch after doing a change to thos files.
-vendorHash = "sha256-4VNiHUblvtcl9UetwiL6ZeVYb0h2e9zhYVsirhAkvOg=";
+vendorHash = "sha256-Qoqu2k4vvnbRFLmT/v8lI+HCEWqJsHFs8uZRfNmwQpo=";
 
 subPackages = ["cmd/headscale"];
 

@@ -102,7 +102,6 @@
 ko
 yq-go
 ripgrep
-postgresql
 
 # 'dot' is needed for pprof graphs
 # go tool pprof -http=: <source>
go.mod, 2 changes

@@ -49,7 +49,6 @@ require (
 gorm.io/gorm v1.25.11
 tailscale.com v1.75.0-pre.0.20240926101731-7d1160ddaab7
 zgo.at/zcache/v2 v2.1.0
-zombiezen.com/go/postgrestest v1.0.1
 )
 
 require (

@@ -135,7 +134,6 @@ require (
 github.com/kortschak/wol v0.0.0-20200729010619-da482cc4850a // indirect
 github.com/kr/pretty v0.3.1 // indirect
 github.com/kr/text v0.2.0 // indirect
-github.com/lib/pq v1.10.9 // indirect
 github.com/lithammer/fuzzysearch v1.1.8 // indirect
 github.com/mattn/go-colorable v0.1.13 // indirect
 github.com/mattn/go-isatty v0.0.20 // indirect
go.sum, 3 changes

@@ -311,7 +311,6 @@ github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
 github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
 github.com/ledongthuc/pdf v0.0.0-20220302134840-0c2507a12d80/go.mod h1:imJHygn/1yfhB7XSJJKlFZKl/J+dCPAknuiaGOshXAs=
-github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
 github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
 github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
 github.com/lithammer/fuzzysearch v1.1.8 h1:/HIuJnjHuXS8bKaiTMeeDlW2/AyIWk2brx1V8LFgLN4=

@@ -732,5 +731,3 @@ tailscale.com v1.75.0-pre.0.20240926101731-7d1160ddaab7 h1:nfRWV6ECxwNvvXKtbqSVs
 tailscale.com v1.75.0-pre.0.20240926101731-7d1160ddaab7/go.mod h1:xKxYf3B3PuezFlRaMT+VhuVu8XTFUTLy+VCzLPMJVmg=
 zgo.at/zcache/v2 v2.1.0 h1:USo+ubK+R4vtjw4viGzTe/zjXyPw6R7SK/RL3epBBxs=
 zgo.at/zcache/v2 v2.1.0/go.mod h1:gyCeoLVo01QjDZynjime8xUGHHMbsLiPyUTBpDGd4Gk=
-zombiezen.com/go/postgrestest v1.0.1 h1:aXoADQAJmZDU3+xilYVut0pHhgc0sF8ZspPW9gFNwP4=
-zombiezen.com/go/postgrestest v1.0.1/go.mod h1:marlZezr+k2oSJrvXHnZUs1olHqpE9czlz8ZYkVxliQ=
@@ -1029,18 +1029,14 @@ func (h *Headscale) loadACLPolicy() error {
 if err != nil {
 return fmt.Errorf("loading nodes from database to validate policy: %w", err)
 }
-users, err := h.db.ListUsers()
-if err != nil {
-return fmt.Errorf("loading users from database to validate policy: %w", err)
-}
 
-_, err = pol.CompileFilterRules(users, nodes)
+_, err = pol.CompileFilterRules(nodes)
 if err != nil {
 return fmt.Errorf("verifying policy rules: %w", err)
 }
 
 if len(nodes) > 0 {
-_, err = pol.CompileSSHPolicy(nodes[0], users, nodes)
+_, err = pol.CompileSSHPolicy(nodes[0], nodes)
 if err != nil {
 return fmt.Errorf("verifying SSH rules: %w", err)
 }
@@ -474,8 +474,6 @@ func NewHeadscaleDatabase(
 Rollback: func(db *gorm.DB) error { return nil },
 },
 {
-// Pick up new user fields used for OIDC and to
-// populate the user with more interesting information.
 ID: "202407191627",
 Migrate: func(tx *gorm.DB) error {
 err := tx.AutoMigrate(&types.User{})

@@ -487,40 +485,6 @@ func NewHeadscaleDatabase(
 },
 Rollback: func(db *gorm.DB) error { return nil },
 },
-{
-// The unique constraint of Name has been dropped
-// in favour of a unique together of name and
-// provider identity.
-ID: "202408181235",
-Migrate: func(tx *gorm.DB) error {
-err := tx.AutoMigrate(&types.User{})
-if err != nil {
-return err
-}
-
-// Set up indexes and unique constraints outside of GORM, it does not support
-// conditional unique constraints.
-// This ensures the following:
-// - A user name and provider_identifier is unique
-// - A provider_identifier is unique
-// - A user name is unique if there is no provider_identifier is not set
-for _, idx := range []string{
-"DROP INDEX IF EXISTS idx_provider_identifier",
-"DROP INDEX IF EXISTS idx_name_provider_identifier",
-"CREATE UNIQUE INDEX IF NOT EXISTS idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;",
-"CREATE UNIQUE INDEX IF NOT EXISTS idx_name_provider_identifier ON users (name,provider_identifier);",
-"CREATE UNIQUE INDEX IF NOT EXISTS idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;",
-} {
-err = tx.Exec(idx).Error
-if err != nil {
-return fmt.Errorf("creating username index: %w", err)
-}
-}
-
-return nil
-},
-Rollback: func(db *gorm.DB) error { return nil },
-},
 },
 )
 

@@ -579,10 +543,10 @@ func openDB(cfg types.DatabaseConfig) (*gorm.DB, error) {
 }
 
 if cfg.Sqlite.WriteAheadLog {
-if err := db.Exec(fmt.Sprintf(`
+if err := db.Exec(`
 PRAGMA journal_mode=WAL;
-PRAGMA wal_autocheckpoint=%d;
-`, cfg.Sqlite.WALAutoCheckPoint)).Error; err != nil {
+PRAGMA wal_autocheckpoint=0;
+`).Error; err != nil {
 return nil, fmt.Errorf("setting WAL mode: %w", err)
 }
 }
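For context on the two PRAGMAs in this hunk, here is a hedged, standalone illustration using `database/sql` with the `mattn/go-sqlite3` driver (an assumption; Headscale itself configures SQLite through GORM):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	db, err := sql.Open("sqlite3", "example.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// journal_mode=WAL turns on write-ahead logging; wal_autocheckpoint=0
	// disables automatic checkpoints (the value hardcoded on the new side of
	// the hunk), while a positive value checkpoints after that many WAL frames
	// (what the removed wal_autocheckpoint setting used to control).
	if _, err := db.Exec(`PRAGMA journal_mode=WAL; PRAGMA wal_autocheckpoint=0;`); err != nil {
		log.Fatal(err)
	}
}
```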
@@ -1,7 +1,6 @@
 package db
 
 import (
-"database/sql"
 "fmt"
 "io"
 "net/netip"

@@ -9,7 +8,6 @@ import (
 "path/filepath"
 "slices"
 "sort"
-"strings"
 "testing"
 "time"
 

@@ -123,12 +121,12 @@ func TestMigrations(t *testing.T) {
 dbPath: "testdata/0-23-0-to-0-24-0-preauthkey-tags-table.sqlite",
 wantFunc: func(t *testing.T, h *HSDatabase) {
 keys, err := Read(h.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) {
-kratest, err := ListPreAuthKeysByUser(rx, 1) // kratest
+kratest, err := ListPreAuthKeys(rx, "kratest")
 if err != nil {
 return nil, err
 }
 
-testkra, err := ListPreAuthKeysByUser(rx, 2) // testkra
+testkra, err := ListPreAuthKeys(rx, "testkra")
 if err != nil {
 return nil, err
 }

@@ -259,120 +257,3 @@ func testCopyOfDatabase(src string) (string, error) {
 func emptyCache() *zcache.Cache[string, types.Node] {
 return zcache.New[string, types.Node](time.Minute, time.Hour)
 }
-
-// requireConstraintFailed checks if the error is a constraint failure with
-// either SQLite and PostgreSQL error messages.
-func requireConstraintFailed(t *testing.T, err error) {
-t.Helper()
-require.Error(t, err)
-if !strings.Contains(err.Error(), "UNIQUE constraint failed:") && !strings.Contains(err.Error(), "violates unique constraint") {
-require.Failf(t, "expected error to contain a constraint failure, got: %s", err.Error())
-}
-}
-
-func TestConstraints(t *testing.T) {
-tests := []struct {
-name string
-run func(*testing.T, *gorm.DB)
-}{
-{
-name: "no-duplicate-username-if-no-oidc",
-run: func(t *testing.T, db *gorm.DB) {
-_, err := CreateUser(db, "user1")
-require.NoError(t, err)
-_, err = CreateUser(db, "user1")
-requireConstraintFailed(t, err)
-},
-},
-{
-name: "no-oidc-duplicate-username-and-id",
-run: func(t *testing.T, db *gorm.DB) {
-user := types.User{
-Model: gorm.Model{ID: 1},
-Name: "user1",
-}
-user.ProviderIdentifier = sql.NullString{String: "http://test.com/user1", Valid: true}
-
-err := db.Save(&user).Error
-require.NoError(t, err)
-
-user = types.User{
-Model: gorm.Model{ID: 2},
-Name: "user1",
-}
-user.ProviderIdentifier = sql.NullString{String: "http://test.com/user1", Valid: true}
-
-err = db.Save(&user).Error
-requireConstraintFailed(t, err)
-},
-},
-{
-name: "no-oidc-duplicate-id",
-run: func(t *testing.T, db *gorm.DB) {
-user := types.User{
-Model: gorm.Model{ID: 1},
-Name: "user1",
-}
-user.ProviderIdentifier = sql.NullString{String: "http://test.com/user1", Valid: true}
-
-err := db.Save(&user).Error
-require.NoError(t, err)
-
-user = types.User{
-Model: gorm.Model{ID: 2},
-Name: "user1.1",
-}
-user.ProviderIdentifier = sql.NullString{String: "http://test.com/user1", Valid: true}
-
-err = db.Save(&user).Error
-requireConstraintFailed(t, err)
-},
-},
-{
-name: "allow-duplicate-username-cli-then-oidc",
-run: func(t *testing.T, db *gorm.DB) {
-_, err := CreateUser(db, "user1") // Create CLI username
-require.NoError(t, err)
-
-user := types.User{
-Name: "user1",
-ProviderIdentifier: sql.NullString{String: "http://test.com/user1", Valid: true},
-}
-
-err = db.Save(&user).Error
-require.NoError(t, err)
-},
-},
-{
-name: "allow-duplicate-username-oidc-then-cli",
-run: func(t *testing.T, db *gorm.DB) {
-user := types.User{
-Name: "user1",
-ProviderIdentifier: sql.NullString{String: "http://test.com/user1", Valid: true},
-}
-
-err := db.Save(&user).Error
-require.NoError(t, err)
-
-_, err = CreateUser(db, "user1") // Create CLI username
-require.NoError(t, err)
-},
-},
-}
-
-for _, tt := range tests {
-t.Run(tt.name+"-postgres", func(t *testing.T) {
-db := newPostgresTestDB(t)
-tt.run(t, db.DB.Debug())
-})
-t.Run(tt.name+"-sqlite", func(t *testing.T) {
-db, err := newSQLiteTestDB()
-if err != nil {
-t.Fatalf("creating database: %s", err)
-}
-
-tt.run(t, db.DB.Debug())
-})
-}
-}
@@ -91,15 +91,15 @@ func (hsdb *HSDatabase) ListEphemeralNodes() (types.Nodes, error) {
 })
 }
 
-func (hsdb *HSDatabase) getNode(uid types.UserID, name string) (*types.Node, error) {
+func (hsdb *HSDatabase) getNode(user string, name string) (*types.Node, error) {
 return Read(hsdb.DB, func(rx *gorm.DB) (*types.Node, error) {
-return getNode(rx, uid, name)
+return getNode(rx, user, name)
 })
 }
 
 // getNode finds a Node by name and user and returns the Node struct.
-func getNode(tx *gorm.DB, uid types.UserID, name string) (*types.Node, error) {
-nodes, err := ListNodesByUser(tx, uid)
+func getNode(tx *gorm.DB, user string, name string) (*types.Node, error) {
+nodes, err := ListNodesByUser(tx, user)
 if err != nil {
 return nil, err
 }
@@ -30,10 +30,10 @@ func (s *Suite) TestGetNode(c *check.C) {
 user, err := db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
-_, err = db.getNode(types.UserID(user.ID), "testnode")
+_, err = db.getNode("test", "testnode")
 c.Assert(err, check.NotNil)
 
 nodeKey := key.NewNode()

@@ -51,7 +51,7 @@ func (s *Suite) TestGetNode(c *check.C) {
 trx := db.DB.Save(node)
 c.Assert(trx.Error, check.IsNil)
 
-_, err = db.getNode(types.UserID(user.ID), "testnode")
+_, err = db.getNode("test", "testnode")
 c.Assert(err, check.IsNil)
 }
 

@@ -59,7 +59,7 @@ func (s *Suite) TestGetNodeByID(c *check.C) {
 user, err := db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 _, err = db.GetNodeByID(0)

@@ -88,7 +88,7 @@ func (s *Suite) TestGetNodeByAnyNodeKey(c *check.C) {
 user, err := db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 _, err = db.GetNodeByID(0)

@@ -136,7 +136,7 @@ func (s *Suite) TestHardDeleteNode(c *check.C) {
 _, err = db.DeleteNode(&node, xsync.NewMapOf[types.NodeID, bool]())
 c.Assert(err, check.IsNil)
 
-_, err = db.getNode(types.UserID(user.ID), "testnode3")
+_, err = db.getNode(user.Name, "testnode3")
 c.Assert(err, check.NotNil)
 }
 

@@ -144,7 +144,7 @@ func (s *Suite) TestListPeers(c *check.C) {
 user, err := db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 _, err = db.GetNodeByID(0)

@@ -190,7 +190,7 @@ func (s *Suite) TestGetACLFilteredPeers(c *check.C) {
 for _, name := range []string{"test", "admin"} {
 user, err := db.CreateUser(name)
 c.Assert(err, check.IsNil)
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 stor = append(stor, base{user, pak})
 }

@@ -256,10 +256,10 @@ func (s *Suite) TestGetACLFilteredPeers(c *check.C) {
 c.Assert(err, check.IsNil)
 c.Assert(len(testPeers), check.Equals, 9)
 
-adminRules, _, err := policy.GenerateFilterAndSSHRulesForTests(aclPolicy, adminNode, adminPeers, []types.User{*stor[0].user, *stor[1].user})
+adminRules, _, err := policy.GenerateFilterAndSSHRulesForTests(aclPolicy, adminNode, adminPeers)
 c.Assert(err, check.IsNil)
 
-testRules, _, err := policy.GenerateFilterAndSSHRulesForTests(aclPolicy, testNode, testPeers, []types.User{*stor[0].user, *stor[1].user})
+testRules, _, err := policy.GenerateFilterAndSSHRulesForTests(aclPolicy, testNode, testPeers)
 c.Assert(err, check.IsNil)
 
 peersOfAdminNode := policy.FilterNodesByACL(adminNode, adminPeers, adminRules)

@@ -282,10 +282,10 @@ func (s *Suite) TestExpireNode(c *check.C) {
 user, err := db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
-_, err = db.getNode(types.UserID(user.ID), "testnode")
+_, err = db.getNode("test", "testnode")
 c.Assert(err, check.NotNil)
 
 nodeKey := key.NewNode()

@@ -303,7 +303,7 @@ func (s *Suite) TestExpireNode(c *check.C) {
 }
 db.DB.Save(node)
 
-nodeFromDB, err := db.getNode(types.UserID(user.ID), "testnode")
+nodeFromDB, err := db.getNode("test", "testnode")
 c.Assert(err, check.IsNil)
 c.Assert(nodeFromDB, check.NotNil)
 

@@ -313,7 +313,7 @@ func (s *Suite) TestExpireNode(c *check.C) {
 err = db.NodeSetExpiry(nodeFromDB.ID, now)
 c.Assert(err, check.IsNil)
 
-nodeFromDB, err = db.getNode(types.UserID(user.ID), "testnode")
+nodeFromDB, err = db.getNode("test", "testnode")
 c.Assert(err, check.IsNil)
 
 c.Assert(nodeFromDB.IsExpired(), check.Equals, true)

@@ -323,10 +323,10 @@ func (s *Suite) TestSetTags(c *check.C) {
 user, err := db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
-_, err = db.getNode(types.UserID(user.ID), "testnode")
+_, err = db.getNode("test", "testnode")
 c.Assert(err, check.NotNil)
 
 nodeKey := key.NewNode()

@@ -349,7 +349,7 @@ func (s *Suite) TestSetTags(c *check.C) {
 sTags := []string{"tag:test", "tag:foo"}
 err = db.SetTags(node.ID, sTags)
 c.Assert(err, check.IsNil)
-node, err = db.getNode(types.UserID(user.ID), "testnode")
+node, err = db.getNode("test", "testnode")
 c.Assert(err, check.IsNil)
 c.Assert(node.ForcedTags, check.DeepEquals, sTags)
 

@@ -357,7 +357,7 @@ func (s *Suite) TestSetTags(c *check.C) {
 eTags := []string{"tag:bar", "tag:test", "tag:unknown", "tag:test"}
 err = db.SetTags(node.ID, eTags)
 c.Assert(err, check.IsNil)
-node, err = db.getNode(types.UserID(user.ID), "testnode")
+node, err = db.getNode("test", "testnode")
 c.Assert(err, check.IsNil)
 c.Assert(
 node.ForcedTags,

@@ -368,7 +368,7 @@ func (s *Suite) TestSetTags(c *check.C) {
 // test removing tags
 err = db.SetTags(node.ID, []string{})
 c.Assert(err, check.IsNil)
-node, err = db.getNode(types.UserID(user.ID), "testnode")
+node, err = db.getNode("test", "testnode")
 c.Assert(err, check.IsNil)
 c.Assert(node.ForcedTags, check.DeepEquals, []string{})
 }

@@ -558,7 +558,7 @@ func TestAutoApproveRoutes(t *testing.T) {
 
 for _, tt := range tests {
 t.Run(tt.name, func(t *testing.T) {
-adb, err := newSQLiteTestDB()
+adb, err := newTestDB()
 require.NoError(t, err)
 pol, err := policy.LoadACLPolicyFromBytes([]byte(tt.acl))
 

@@ -568,7 +568,7 @@ func TestAutoApproveRoutes(t *testing.T) {
 user, err := adb.CreateUser("test")
 require.NoError(t, err)
 
-pak, err := adb.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := adb.CreatePreAuthKey(user.Name, false, false, nil, nil)
 require.NoError(t, err)
 
 nodeKey := key.NewNode()

@@ -692,7 +692,7 @@ func generateRandomNumber(t *testing.T, max int64) int64 {
 }
 
 func TestListEphemeralNodes(t *testing.T) {
-db, err := newSQLiteTestDB()
+db, err := newTestDB()
 if err != nil {
 t.Fatalf("creating db: %s", err)
 }

@@ -700,10 +700,10 @@ func TestListEphemeralNodes(t *testing.T) {
 user, err := db.CreateUser("test")
 require.NoError(t, err)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 require.NoError(t, err)
 
-pakEph, err := db.CreatePreAuthKey(types.UserID(user.ID), false, true, nil, nil)
+pakEph, err := db.CreatePreAuthKey(user.Name, false, true, nil, nil)
 require.NoError(t, err)
 
 node := types.Node{

@@ -748,7 +748,7 @@ func TestListEphemeralNodes(t *testing.T) {
 }
 
 func TestRenameNode(t *testing.T) {
-db, err := newSQLiteTestDB()
+db, err := newTestDB()
 if err != nil {
 t.Fatalf("creating db: %s", err)
 }
@@ -23,27 +23,29 @@
 )
 
 func (hsdb *HSDatabase) CreatePreAuthKey(
-uid types.UserID,
+// TODO(kradalby): Should be ID, not name
+userName string,
 reusable bool,
 ephemeral bool,
 expiration *time.Time,
 aclTags []string,
 ) (*types.PreAuthKey, error) {
 return Write(hsdb.DB, func(tx *gorm.DB) (*types.PreAuthKey, error) {
-return CreatePreAuthKey(tx, uid, reusable, ephemeral, expiration, aclTags)
+return CreatePreAuthKey(tx, userName, reusable, ephemeral, expiration, aclTags)
 })
 }
 
 // CreatePreAuthKey creates a new PreAuthKey in a user, and returns it.
 func CreatePreAuthKey(
 tx *gorm.DB,
-uid types.UserID,
+// TODO(kradalby): Should be ID, not name
+userName string,
 reusable bool,
 ephemeral bool,
 expiration *time.Time,
 aclTags []string,
 ) (*types.PreAuthKey, error) {
-user, err := GetUserByID(tx, uid)
+user, err := GetUserByUsername(tx, userName)
 if err != nil {
 return nil, err
 }

@@ -87,15 +89,15 @@ func CreatePreAuthKey(
 return &key, nil
 }
 
-func (hsdb *HSDatabase) ListPreAuthKeys(uid types.UserID) ([]types.PreAuthKey, error) {
+func (hsdb *HSDatabase) ListPreAuthKeys(userName string) ([]types.PreAuthKey, error) {
 return Read(hsdb.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) {
-return ListPreAuthKeysByUser(rx, uid)
+return ListPreAuthKeys(rx, userName)
 })
 }
 
-// ListPreAuthKeysByUser returns the list of PreAuthKeys for a user.
-func ListPreAuthKeysByUser(tx *gorm.DB, uid types.UserID) ([]types.PreAuthKey, error) {
-user, err := GetUserByID(tx, uid)
+// ListPreAuthKeys returns the list of PreAuthKeys for a user.
+func ListPreAuthKeys(tx *gorm.DB, userName string) ([]types.PreAuthKey, error) {
+user, err := GetUserByUsername(tx, userName)
 if err != nil {
 return nil, err
 }
@@ -11,14 +11,14 @@
 )
 
 func (*Suite) TestCreatePreAuthKey(c *check.C) {
-// ID does not exist
-_, err := db.CreatePreAuthKey(12345, true, false, nil, nil)
+_, err := db.CreatePreAuthKey("bogus", true, false, nil, nil)
 c.Assert(err, check.NotNil)
 
 user, err := db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-key, err := db.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil)
+key, err := db.CreatePreAuthKey(user.Name, true, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 // Did we get a valid key?

@@ -26,18 +26,17 @@ func (*Suite) TestCreatePreAuthKey(c *check.C) {
 c.Assert(len(key.Key), check.Equals, 48)
 
 // Make sure the User association is populated
-c.Assert(key.User.ID, check.Equals, user.ID)
+c.Assert(key.User.Name, check.Equals, user.Name)
 
-// ID does not exist
-_, err = db.ListPreAuthKeys(1000000)
+_, err = db.ListPreAuthKeys("bogus")
 c.Assert(err, check.NotNil)
 
-keys, err := db.ListPreAuthKeys(types.UserID(user.ID))
+keys, err := db.ListPreAuthKeys(user.Name)
 c.Assert(err, check.IsNil)
 c.Assert(len(keys), check.Equals, 1)
 
 // Make sure the User association is populated
-c.Assert((keys)[0].User.ID, check.Equals, user.ID)
+c.Assert((keys)[0].User.Name, check.Equals, user.Name)
 }
 
 func (*Suite) TestExpiredPreAuthKey(c *check.C) {

@@ -45,7 +44,7 @@ func (*Suite) TestExpiredPreAuthKey(c *check.C) {
 c.Assert(err, check.IsNil)
 
 now := time.Now().Add(-5 * time.Second)
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), true, false, &now, nil)
+pak, err := db.CreatePreAuthKey(user.Name, true, false, &now, nil)
 c.Assert(err, check.IsNil)
 
 key, err := db.ValidatePreAuthKey(pak.Key)

@@ -63,7 +62,7 @@ func (*Suite) TestValidateKeyOk(c *check.C) {
 user, err := db.CreateUser("test3")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, true, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 key, err := db.ValidatePreAuthKey(pak.Key)

@@ -75,7 +74,7 @@ func (*Suite) TestAlreadyUsedKey(c *check.C) {
 user, err := db.CreateUser("test4")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 node := types.Node{

@@ -97,7 +96,7 @@ func (*Suite) TestReusableBeingUsedKey(c *check.C) {
 user, err := db.CreateUser("test5")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, true, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 node := types.Node{

@@ -119,7 +118,7 @@ func (*Suite) TestNotReusableNotBeingUsedKey(c *check.C) {
 user, err := db.CreateUser("test6")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 key, err := db.ValidatePreAuthKey(pak.Key)

@@ -131,7 +130,7 @@ func (*Suite) TestExpirePreauthKey(c *check.C) {
 user, err := db.CreateUser("test3")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, true, false, nil, nil)
 c.Assert(err, check.IsNil)
 c.Assert(pak.Expiration, check.IsNil)
 

@@ -148,7 +147,7 @@ func (*Suite) TestNotReusableMarkedAsUsed(c *check.C) {
 user, err := db.CreateUser("test6")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 pak.Used = true
 db.DB.Save(&pak)

@@ -161,15 +160,15 @@ func (*Suite) TestPreAuthKeyACLTags(c *check.C) {
 user, err := db.CreateUser("test8")
 c.Assert(err, check.IsNil)
 
-_, err = db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, []string{"badtag"})
+_, err = db.CreatePreAuthKey(user.Name, false, false, nil, []string{"badtag"})
 c.Assert(err, check.NotNil) // Confirm that malformed tags are rejected
 
 tags := []string{"tag:test1", "tag:test2"}
 tagsWithDuplicate := []string{"tag:test1", "tag:test2", "tag:test2"}
-_, err = db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, tagsWithDuplicate)
+_, err = db.CreatePreAuthKey(user.Name, false, false, nil, tagsWithDuplicate)
 c.Assert(err, check.IsNil)
 
-listedPaks, err := db.ListPreAuthKeys(types.UserID(user.ID))
+listedPaks, err := db.ListPreAuthKeys("test8")
 c.Assert(err, check.IsNil)
 gotTags := listedPaks[0].Proto().GetAclTags()
 sort.Sort(sort.StringSlice(gotTags))
@@ -639,7 +639,7 @@ func EnableAutoApprovedRoutes(
 
 log.Trace().
 Str("node", node.Hostname).
-Uint("user.id", node.User.ID).
+Str("user", node.User.Name).
 Strs("routeApprovers", routeApprovers).
 Str("prefix", netip.Prefix(advertisedRoute.Prefix).String()).
 Msg("looking up route for autoapproving")

@@ -648,13 +648,8 @@ func EnableAutoApprovedRoutes(
 if approvedAlias == node.User.Username() {
 approvedRoutes = append(approvedRoutes, advertisedRoute)
 } else {
-users, err := ListUsers(tx)
-if err != nil {
-return fmt.Errorf("looking up users to expand route alias: %w", err)
-}
-
 // TODO(kradalby): figure out how to get this to depend on less stuff
-approvedIps, err := aclPolicy.ExpandAlias(types.Nodes{node}, users, approvedAlias)
+approvedIps, err := aclPolicy.ExpandAlias(types.Nodes{node}, approvedAlias)
 if err != nil {
 return fmt.Errorf("expanding alias %q for autoApprovers: %w", approvedAlias, err)
 }
@ -35,10 +35,10 @@ func (s *Suite) TestGetRoutes(c *check.C) {
|
||||||
user, err := db.CreateUser("test")
|
user, err := db.CreateUser("test")
|
||||||
c.Assert(err, check.IsNil)
|
c.Assert(err, check.IsNil)
|
||||||
|
|
||||||
pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
|
pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
|
||||||
c.Assert(err, check.IsNil)
|
c.Assert(err, check.IsNil)
|
||||||
|
|
||||||
_, err = db.getNode(types.UserID(user.ID), "test_get_route_node")
|
_, err = db.getNode("test", "test_get_route_node")
|
||||||
c.Assert(err, check.NotNil)
|
c.Assert(err, check.NotNil)
|
||||||
|
|
||||||
route, err := netip.ParsePrefix("10.0.0.0/24")
|
route, err := netip.ParsePrefix("10.0.0.0/24")
|
||||||
|
@ -79,10 +79,10 @@ func (s *Suite) TestGetEnableRoutes(c *check.C) {
|
||||||
user, err := db.CreateUser("test")
|
user, err := db.CreateUser("test")
|
||||||
c.Assert(err, check.IsNil)
|
c.Assert(err, check.IsNil)
|
||||||
|
|
||||||
pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
|
pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
|
||||||
c.Assert(err, check.IsNil)
|
c.Assert(err, check.IsNil)
|
||||||
|
|
||||||
_, err = db.getNode(types.UserID(user.ID), "test_enable_route_node")
|
_, err = db.getNode("test", "test_enable_route_node")
|
||||||
c.Assert(err, check.NotNil)
|
c.Assert(err, check.NotNil)
|
||||||
|
|
||||||
route, err := netip.ParsePrefix(
|
route, err := netip.ParsePrefix(
|
||||||
|
@ -153,10 +153,10 @@ func (s *Suite) TestIsUniquePrefix(c *check.C) {
|
||||||
user, err := db.CreateUser("test")
|
user, err := db.CreateUser("test")
|
||||||
c.Assert(err, check.IsNil)
|
c.Assert(err, check.IsNil)
|
||||||
|
|
||||||
pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
|
pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
|
||||||
c.Assert(err, check.IsNil)
|
c.Assert(err, check.IsNil)
|
||||||
|
|
||||||
_, err = db.getNode(types.UserID(user.ID), "test_enable_route_node")
|
_, err = db.getNode("test", "test_enable_route_node")
|
||||||
c.Assert(err, check.NotNil)
|
c.Assert(err, check.NotNil)
|
||||||
|
|
||||||
route, err := netip.ParsePrefix(
|
route, err := netip.ParsePrefix(
|
||||||
|
@ -234,10 +234,10 @@ func (s *Suite) TestDeleteRoutes(c *check.C) {
|
||||||
user, err := db.CreateUser("test")
|
user, err := db.CreateUser("test")
|
||||||
c.Assert(err, check.IsNil)
|
c.Assert(err, check.IsNil)
|
||||||
|
|
||||||
pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
|
pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
|
||||||
c.Assert(err, check.IsNil)
|
c.Assert(err, check.IsNil)
|
||||||
|
|
||||||
_, err = db.getNode(types.UserID(user.ID), "test_enable_route_node")
|
_, err = db.getNode("test", "test_enable_route_node")
|
||||||
c.Assert(err, check.NotNil)
|
c.Assert(err, check.NotNil)
|
||||||
|
|
||||||
prefix, err := netip.ParsePrefix(
|
prefix, err := netip.ParsePrefix(
|
||||||
|
@@ -1,17 +1,12 @@
 package db
 
 import (
-"context"
 "log"
-"net/url"
 "os"
-"strconv"
-"strings"
 "testing"
 
 "github.com/juanfont/headscale/hscontrol/types"
 "gopkg.in/check.v1"
-"zombiezen.com/go/postgrestest"
 )
 
 func Test(t *testing.T) {

@@ -41,15 +36,13 @@ func (s *Suite) ResetDB(c *check.C) {
 // }
 
 var err error
-db, err = newSQLiteTestDB()
+db, err = newTestDB()
 if err != nil {
 c.Fatal(err)
 }
 }
 
-// TODO(kradalby): make this a t.Helper when we dont depend
-// on check test framework.
-func newSQLiteTestDB() (*HSDatabase, error) {
+func newTestDB() (*HSDatabase, error) {
 var err error
 tmpDir, err = os.MkdirTemp("", "headscale-db-test-*")
 if err != nil {

@@ -60,7 +53,7 @@ func newSQLiteTestDB() (*HSDatabase, error) {
 
 db, err = NewHeadscaleDatabase(
 types.DatabaseConfig{
-Type: types.DatabaseSqlite,
+Type: "sqlite3",
 Sqlite: types.SqliteConfig{
 Path: tmpDir + "/headscale_test.db",
 },

@@ -74,53 +67,3 @@ func newSQLiteTestDB() (*HSDatabase, error) {
 
 return db, nil
 }
-
-func newPostgresTestDB(t *testing.T) *HSDatabase {
-t.Helper()
-
-var err error
-tmpDir, err = os.MkdirTemp("", "headscale-db-test-*")
-if err != nil {
-t.Fatal(err)
-}
-
-log.Printf("database path: %s", tmpDir+"/headscale_test.db")
-
-ctx := context.Background()
-srv, err := postgrestest.Start(ctx)
-if err != nil {
-t.Fatal(err)
-}
-t.Cleanup(srv.Cleanup)
-
-u, err := srv.CreateDatabase(ctx)
-if err != nil {
-t.Fatal(err)
-}
-t.Logf("created local postgres: %s", u)
-pu, _ := url.Parse(u)
-
-pass, _ := pu.User.Password()
-port, _ := strconv.Atoi(pu.Port())
-
-db, err = NewHeadscaleDatabase(
-types.DatabaseConfig{
-Type: types.DatabasePostgres,
-Postgres: types.PostgresConfig{
-Host: pu.Hostname(),
-User: pu.User.Username(),
-Name: strings.TrimLeft(pu.Path, "/"),
-Pass: pass,
-Port: port,
-Ssl: "disable",
-},
-},
-"",
-emptyCache(),
-)
-if err != nil {
-t.Fatal(err)
-}
-
-return db
-}
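The removed newPostgresTestDB helper above boils down to a small, reusable pattern: start a throwaway PostgreSQL server, create an isolated database, and register cleanup with the test. The sketch below is illustrative only and is not code from this commit; the postgrestest calls are the ones visible in the hunk, while the helper name and the returned URL type are assumptions.

```go
// Sketch only: a disposable Postgres for tests, in the spirit of the removed helper.
package db_test

import (
	"context"
	"net/url"
	"testing"

	"zombiezen.com/go/postgrestest"
)

// newThrowawayPostgres is a hypothetical helper name; it returns the parsed
// connection URL of a database that lives only for the duration of the test.
func newThrowawayPostgres(t *testing.T) *url.URL {
	t.Helper()

	ctx := context.Background()

	// Start a temporary PostgreSQL server and tear it down when the test ends.
	srv, err := postgrestest.Start(ctx)
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(srv.Cleanup)

	// Create an isolated database and hand its connection URL to the caller.
	raw, err := srv.CreateDatabase(ctx)
	if err != nil {
		t.Fatal(err)
	}

	u, err := url.Parse(raw)
	if err != nil {
		t.Fatal(err)
	}

	return u
}
```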
@@ -28,9 +28,11 @@ func CreateUser(tx *gorm.DB, name string) (*types.User, error) {
 if err != nil {
 return nil, err
 }
-user := types.User{
-Name: name,
+user := types.User{}
+if err := tx.Where("name = ?", name).First(&user).Error; err == nil {
+return nil, ErrUserExists
 }
+user.Name = name
 if err := tx.Create(&user).Error; err != nil {
 return nil, fmt.Errorf("creating user: %w", err)
 }

@@ -38,21 +40,21 @@ func CreateUser(tx *gorm.DB, name string) (*types.User, error) {
 return &user, nil
 }
 
-func (hsdb *HSDatabase) DestroyUser(uid types.UserID) error {
+func (hsdb *HSDatabase) DestroyUser(name string) error {
 return hsdb.Write(func(tx *gorm.DB) error {
-return DestroyUser(tx, uid)
+return DestroyUser(tx, name)
 })
 }
 
 // DestroyUser destroys a User. Returns error if the User does
 // not exist or if there are nodes associated with it.
-func DestroyUser(tx *gorm.DB, uid types.UserID) error {
-user, err := GetUserByID(tx, uid)
+func DestroyUser(tx *gorm.DB, name string) error {
+user, err := GetUserByUsername(tx, name)
 if err != nil {
-return err
+return ErrUserNotFound
 }
 
-nodes, err := ListNodesByUser(tx, uid)
+nodes, err := ListNodesByUser(tx, name)
 if err != nil {
 return err
 }

@@ -60,7 +62,7 @@ func DestroyUser(tx *gorm.DB, uid types.UserID) error {
 return ErrUserStillHasNodes
 }
 
-keys, err := ListPreAuthKeysByUser(tx, uid)
+keys, err := ListPreAuthKeys(tx, name)
 if err != nil {
 return err
 }

@@ -78,17 +80,17 @@ func DestroyUser(tx *gorm.DB, uid types.UserID) error {
 return nil
 }
 
-func (hsdb *HSDatabase) RenameUser(uid types.UserID, newName string) error {
+func (hsdb *HSDatabase) RenameUser(oldName, newName string) error {
 return hsdb.Write(func(tx *gorm.DB) error {
-return RenameUser(tx, uid, newName)
+return RenameUser(tx, oldName, newName)
 })
 }
 
 // RenameUser renames a User. Returns error if the User does
 // not exist or if another User exists with the new name.
-func RenameUser(tx *gorm.DB, uid types.UserID, newName string) error {
+func RenameUser(tx *gorm.DB, oldName, newName string) error {
 var err error
-oldUser, err := GetUserByID(tx, uid)
+oldUser, err := GetUserByUsername(tx, oldName)
 if err != nil {
 return err
 }

@@ -96,25 +98,50 @@ func RenameUser(tx *gorm.DB, uid types.UserID, newName string) error {
 if err != nil {
 return err
 }
+_, err = GetUserByUsername(tx, newName)
+if err == nil {
+return ErrUserExists
+}
+if !errors.Is(err, ErrUserNotFound) {
+return err
+}
 
 oldUser.Name = newName
 
-if err := tx.Save(&oldUser).Error; err != nil {
-return err
+if result := tx.Save(&oldUser); result.Error != nil {
+return result.Error
 }
 
 return nil
 }
 
-func (hsdb *HSDatabase) GetUserByID(uid types.UserID) (*types.User, error) {
+func (hsdb *HSDatabase) GetUserByName(name string) (*types.User, error) {
 return Read(hsdb.DB, func(rx *gorm.DB) (*types.User, error) {
-return GetUserByID(rx, uid)
+return GetUserByUsername(rx, name)
 })
 }
 
-func GetUserByID(tx *gorm.DB, uid types.UserID) (*types.User, error) {
+func GetUserByUsername(tx *gorm.DB, name string) (*types.User, error) {
 user := types.User{}
-if result := tx.First(&user, "id = ?", uid); errors.Is(
+if result := tx.First(&user, "name = ?", name); errors.Is(
+result.Error,
+gorm.ErrRecordNotFound,
+) {
+return nil, ErrUserNotFound
+}
+
+return &user, nil
+}
+
+func (hsdb *HSDatabase) GetUserByID(id types.UserID) (*types.User, error) {
+return Read(hsdb.DB, func(rx *gorm.DB) (*types.User, error) {
+return GetUserByID(rx, id)
+})
+}
+
+func GetUserByID(tx *gorm.DB, id types.UserID) (*types.User, error) {
+user := types.User{}
+if result := tx.First(&user, "id = ?", id); errors.Is(
 result.Error,
 gorm.ErrRecordNotFound,
 ) {

@@ -142,69 +169,54 @@ func GetUserByOIDCIdentifier(tx *gorm.DB, id string) (*types.User, error) {
 return &user, nil
 }
 
-func (hsdb *HSDatabase) ListUsers(where ...*types.User) ([]types.User, error) {
+func (hsdb *HSDatabase) ListUsers() ([]types.User, error) {
 return Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) {
-return ListUsers(rx, where...)
+return ListUsers(rx)
 })
 }
 
 // ListUsers gets all the existing users.
-func ListUsers(tx *gorm.DB, where ...*types.User) ([]types.User, error) {
-if len(where) > 1 {
-return nil, fmt.Errorf("expect 0 or 1 where User structs, got %d", len(where))
-}
-
-var user *types.User
-if len(where) == 1 {
-user = where[0]
-}
-
+func ListUsers(tx *gorm.DB) ([]types.User, error) {
 users := []types.User{}
-if err := tx.Where(user).Find(&users).Error; err != nil {
+if err := tx.Find(&users).Error; err != nil {
 return nil, err
 }
 
 return users, nil
 }
 
-// GetUserByName returns a user if the provided username is
-// unique, and otherwise an error.
-func (hsdb *HSDatabase) GetUserByName(name string) (*types.User, error) {
-users, err := hsdb.ListUsers(&types.User{Name: name})
+// ListNodesByUser gets all the nodes in a given user.
+func ListNodesByUser(tx *gorm.DB, name string) (types.Nodes, error) {
+err := util.CheckForFQDNRules(name)
+if err != nil {
+return nil, err
+}
+user, err := GetUserByUsername(tx, name)
 if err != nil {
 return nil, err
 }
 
-if len(users) == 0 {
-return nil, ErrUserNotFound
-}
-
-if len(users) != 1 {
-return nil, fmt.Errorf("expected exactly one user, found %d", len(users))
-}
-
-return &users[0], nil
-}
-
-// ListNodesByUser gets all the nodes in a given user.
-func ListNodesByUser(tx *gorm.DB, uid types.UserID) (types.Nodes, error) {
 nodes := types.Nodes{}
-if err := tx.Preload("AuthKey").Preload("AuthKey.User").Preload("User").Where(&types.Node{UserID: uint(uid)}).Find(&nodes).Error; err != nil {
+if err := tx.Preload("AuthKey").Preload("AuthKey.User").Preload("User").Where(&types.Node{UserID: user.ID}).Find(&nodes).Error; err != nil {
 return nil, err
 }
 
 return nodes, nil
 }
 
-func (hsdb *HSDatabase) AssignNodeToUser(node *types.Node, uid types.UserID) error {
+func (hsdb *HSDatabase) AssignNodeToUser(node *types.Node, username string) error {
 return hsdb.Write(func(tx *gorm.DB) error {
-return AssignNodeToUser(tx, node, uid)
+return AssignNodeToUser(tx, node, username)
 })
 }
 
 // AssignNodeToUser assigns a Node to a user.
-func AssignNodeToUser(tx *gorm.DB, node *types.Node, uid types.UserID) error {
-user, err := GetUserByID(tx, uid)
+func AssignNodeToUser(tx *gorm.DB, node *types.Node, username string) error {
+err := util.CheckForFQDNRules(username)
+if err != nil {
+return err
+}
+user, err := GetUserByUsername(tx, username)
 if err != nil {
 return err
 }
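The left-hand side of the user-database hunks above resolves a username to exactly one user record before any node or key operation, returning ErrUserNotFound for no match and an error for an ambiguous match. The sketch below restates that rule with simplified stand-in types; it is not the project's implementation, only the shape of the check.

```go
// Sketch only: resolve a username to a unique user, as the removed
// GetUserByName-on-ListUsers variant does. Types are stand-ins.
package userlookup

import (
	"errors"
	"fmt"
)

type User struct {
	ID   uint
	Name string
}

var ErrUserNotFound = errors.New("user not found")

// getUserByName returns the single user with the given name, an error if the
// name is unknown, and a distinct error if the name is ambiguous.
func getUserByName(all []User, name string) (*User, error) {
	var matches []User
	for _, u := range all {
		if u.Name == name {
			matches = append(matches, u)
		}
	}
	if len(matches) == 0 {
		return nil, ErrUserNotFound
	}
	if len(matches) != 1 {
		return nil, fmt.Errorf("expected exactly one user, found %d", len(matches))
	}
	return &matches[0], nil
}
```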
@@ -1,8 +1,6 @@
 package db
 
 import (
-"strings"
-
 "github.com/juanfont/headscale/hscontrol/types"
 "github.com/juanfont/headscale/hscontrol/util"
 "gopkg.in/check.v1"

@@ -19,24 +17,24 @@ func (s *Suite) TestCreateAndDestroyUser(c *check.C) {
 c.Assert(err, check.IsNil)
 c.Assert(len(users), check.Equals, 1)
 
-err = db.DestroyUser(types.UserID(user.ID))
+err = db.DestroyUser("test")
 c.Assert(err, check.IsNil)
 
-_, err = db.GetUserByID(types.UserID(user.ID))
+_, err = db.GetUserByName("test")
 c.Assert(err, check.NotNil)
 }
 
 func (s *Suite) TestDestroyUserErrors(c *check.C) {
-err := db.DestroyUser(9998)
+err := db.DestroyUser("test")
 c.Assert(err, check.Equals, ErrUserNotFound)
 
 user, err := db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
-err = db.DestroyUser(types.UserID(user.ID))
+err = db.DestroyUser("test")
 c.Assert(err, check.IsNil)
 
 result := db.DB.Preload("User").First(&pak, "key = ?", pak.Key)

@@ -46,7 +44,7 @@ func (s *Suite) TestDestroyUserErrors(c *check.C) {
 user, err = db.CreateUser("test")
 c.Assert(err, check.IsNil)
 
-pak, err = db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
+pak, err = db.CreatePreAuthKey(user.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 node := types.Node{

@@ -59,7 +57,7 @@ func (s *Suite) TestDestroyUserErrors(c *check.C) {
 trx := db.DB.Save(&node)
 c.Assert(trx.Error, check.IsNil)
 
-err = db.DestroyUser(types.UserID(user.ID))
+err = db.DestroyUser("test")
 c.Assert(err, check.Equals, ErrUserStillHasNodes)
 }
 

@@ -72,29 +70,24 @@ func (s *Suite) TestRenameUser(c *check.C) {
 c.Assert(err, check.IsNil)
 c.Assert(len(users), check.Equals, 1)
 
-err = db.RenameUser(types.UserID(userTest.ID), "test-renamed")
+err = db.RenameUser("test", "test-renamed")
 c.Assert(err, check.IsNil)
 
-users, err = db.ListUsers(&types.User{Name: "test"})
-c.Assert(err, check.Equals, nil)
-c.Assert(len(users), check.Equals, 0)
+_, err = db.GetUserByName("test")
+c.Assert(err, check.Equals, ErrUserNotFound)
 
-users, err = db.ListUsers(&types.User{Name: "test-renamed"})
+_, err = db.GetUserByName("test-renamed")
 c.Assert(err, check.IsNil)
-c.Assert(len(users), check.Equals, 1)
 
-err = db.RenameUser(99988, "test")
+err = db.RenameUser("test-does-not-exit", "test")
 c.Assert(err, check.Equals, ErrUserNotFound)
 
 userTest2, err := db.CreateUser("test2")
 c.Assert(err, check.IsNil)
 c.Assert(userTest2.Name, check.Equals, "test2")
 
-want := "UNIQUE constraint failed"
-err = db.RenameUser(types.UserID(userTest2.ID), "test-renamed")
-if err == nil || !strings.Contains(err.Error(), want) {
-c.Fatalf("expected failure with unique constraint, want: %q got: %q", want, err)
-}
+err = db.RenameUser("test2", "test-renamed")
+c.Assert(err, check.Equals, ErrUserExists)
 }
 
 func (s *Suite) TestSetMachineUser(c *check.C) {

@@ -104,7 +97,7 @@ func (s *Suite) TestSetMachineUser(c *check.C) {
 newUser, err := db.CreateUser("new")
 c.Assert(err, check.IsNil)
 
-pak, err := db.CreatePreAuthKey(types.UserID(oldUser.ID), false, false, nil, nil)
+pak, err := db.CreatePreAuthKey(oldUser.Name, false, false, nil, nil)
 c.Assert(err, check.IsNil)
 
 node := types.Node{

@@ -118,15 +111,15 @@ func (s *Suite) TestSetMachineUser(c *check.C) {
 c.Assert(trx.Error, check.IsNil)
 c.Assert(node.UserID, check.Equals, oldUser.ID)
 
-err = db.AssignNodeToUser(&node, types.UserID(newUser.ID))
+err = db.AssignNodeToUser(&node, newUser.Name)
 c.Assert(err, check.IsNil)
 c.Assert(node.UserID, check.Equals, newUser.ID)
 c.Assert(node.User.Name, check.Equals, newUser.Name)
 
-err = db.AssignNodeToUser(&node, 9584849)
+err = db.AssignNodeToUser(&node, "non-existing-user")
 c.Assert(err, check.Equals, ErrUserNotFound)
 
-err = db.AssignNodeToUser(&node, types.UserID(newUser.ID))
+err = db.AssignNodeToUser(&node, newUser.Name)
 c.Assert(err, check.IsNil)
 c.Assert(node.UserID, check.Equals, newUser.ID)
 c.Assert(node.User.Name, check.Equals, newUser.Name)
@@ -65,34 +65,24 @@ func (api headscaleV1APIServer) RenameUser(
 ctx context.Context,
 request *v1.RenameUserRequest,
 ) (*v1.RenameUserResponse, error) {
-oldUser, err := api.h.db.GetUserByName(request.GetOldName())
+err := api.h.db.RenameUser(request.GetOldName(), request.GetNewName())
 if err != nil {
 return nil, err
 }
 
-err = api.h.db.RenameUser(types.UserID(oldUser.ID), request.GetNewName())
+user, err := api.h.db.GetUserByName(request.GetNewName())
 if err != nil {
 return nil, err
 }
 
-newUser, err := api.h.db.GetUserByName(request.GetNewName())
-if err != nil {
-return nil, err
-}
-
-return &v1.RenameUserResponse{User: newUser.Proto()}, nil
+return &v1.RenameUserResponse{User: user.Proto()}, nil
 }
 
 func (api headscaleV1APIServer) DeleteUser(
 ctx context.Context,
 request *v1.DeleteUserRequest,
 ) (*v1.DeleteUserResponse, error) {
-user, err := api.h.db.GetUserByName(request.GetName())
-if err != nil {
-return nil, err
-}
-
-err = api.h.db.DestroyUser(types.UserID(user.ID))
+err := api.h.db.DestroyUser(request.GetName())
 if err != nil {
 return nil, err
 }

@@ -141,13 +131,8 @@ func (api headscaleV1APIServer) CreatePreAuthKey(
 }
 }
 
-user, err := api.h.db.GetUserByName(request.GetUser())
-if err != nil {
-return nil, err
-}
-
 preAuthKey, err := api.h.db.CreatePreAuthKey(
-types.UserID(user.ID),
+request.GetUser(),
 request.GetReusable(),
 request.GetEphemeral(),
 &expiration,

@@ -183,12 +168,7 @@ func (api headscaleV1APIServer) ListPreAuthKeys(
 ctx context.Context,
 request *v1.ListPreAuthKeysRequest,
 ) (*v1.ListPreAuthKeysResponse, error) {
-user, err := api.h.db.GetUserByName(request.GetUser())
-if err != nil {
-return nil, err
-}
-
-preAuthKeys, err := api.h.db.ListPreAuthKeys(types.UserID(user.ID))
+preAuthKeys, err := api.h.db.ListPreAuthKeys(request.GetUser())
 if err != nil {
 return nil, err
 }

@@ -426,20 +406,10 @@ func (api headscaleV1APIServer) ListNodes(
 ctx context.Context,
 request *v1.ListNodesRequest,
 ) (*v1.ListNodesResponse, error) {
-// TODO(kradalby): it looks like this can be simplified a lot,
-// the filtering of nodes by user, vs nodes as a whole can
-// probably be done once.
-// TODO(kradalby): This should be done in one tx.
-
 isLikelyConnected := api.h.nodeNotifier.LikelyConnectedMap()
 if request.GetUser() != "" {
-user, err := api.h.db.GetUserByName(request.GetUser())
-if err != nil {
-return nil, err
-}
-
 nodes, err := db.Read(api.h.db.DB, func(rx *gorm.DB) (types.Nodes, error) {
-return db.ListNodesByUser(rx, types.UserID(user.ID))
+return db.ListNodesByUser(rx, request.GetUser())
 })
 if err != nil {
 return nil, err

@@ -495,18 +465,12 @@ func (api headscaleV1APIServer) MoveNode(
 ctx context.Context,
 request *v1.MoveNodeRequest,
 ) (*v1.MoveNodeResponse, error) {
-// TODO(kradalby): This should be done in one tx.
 node, err := api.h.db.GetNodeByID(types.NodeID(request.GetNodeId()))
 if err != nil {
 return nil, err
 }
 
-user, err := api.h.db.GetUserByName(request.GetUser())
-if err != nil {
-return nil, err
-}
-
-err = api.h.db.AssignNodeToUser(node, types.UserID(user.ID))
+err = api.h.db.AssignNodeToUser(node, request.GetUser())
 if err != nil {
 return nil, err
 }

@@ -773,18 +737,14 @@ func (api headscaleV1APIServer) SetPolicy(
 if err != nil {
 return nil, fmt.Errorf("loading nodes from database to validate policy: %w", err)
 }
-users, err := api.h.db.ListUsers()
-if err != nil {
-return nil, fmt.Errorf("loading users from database to validate policy: %w", err)
-}
 
-_, err = pol.CompileFilterRules(users, nodes)
+_, err = pol.CompileFilterRules(nodes)
 if err != nil {
 return nil, fmt.Errorf("verifying policy rules: %w", err)
 }
 
 if len(nodes) > 0 {
-_, err = pol.CompileSSHPolicy(nodes[0], users, nodes)
+_, err = pol.CompileSSHPolicy(nodes[0], nodes)
 if err != nil {
 return nil, fmt.Errorf("verifying SSH rules: %w", err)
 }
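The gRPC hunks above all follow the same pattern: the left-hand side resolves the request's username to a database ID with GetUserByName and then calls the ID-based DB function, while the right-hand side hands the name straight to the DB layer. A hedged sketch of the two shapes, with a made-up store interface standing in for the real database handle:

```go
// Sketch only: the two handler shapes visible in the diff, not the real API server.
package grpcsketch

import "fmt"

type UserID uint64

// store is a hypothetical stand-in for the database layer.
type store interface {
	GetUserByName(name string) (UserID, error)
	DestroyUserByID(id UserID) error
	DestroyUserByName(name string) error
}

// deleteUserByID mirrors the left-hand shape: name -> ID -> destroy.
func deleteUserByID(db store, name string) error {
	id, err := db.GetUserByName(name)
	if err != nil {
		return fmt.Errorf("resolving user %q: %w", name, err)
	}
	return db.DestroyUserByID(id)
}

// deleteUserByName mirrors the right-hand shape: the DB accepts the name directly.
func deleteUserByName(db store, name string) error {
	return db.DestroyUserByName(name)
}
```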
@@ -153,7 +153,6 @@ func addNextDNSMetadata(resolvers []*dnstype.Resolver, node *types.Node) {
 func (m *Mapper) fullMapResponse(
 node *types.Node,
 peers types.Nodes,
-users []types.User,
 pol *policy.ACLPolicy,
 capVer tailcfg.CapabilityVersion,
 ) (*tailcfg.MapResponse, error) {

@@ -168,7 +167,6 @@ func (m *Mapper) fullMapResponse(
 pol,
 node,
 capVer,
-users,
 peers,
 peers,
 m.cfg,

@@ -191,12 +189,8 @@ func (m *Mapper) FullMapResponse(
 if err != nil {
 return nil, err
 }
-users, err := m.db.ListUsers()
-if err != nil {
-return nil, err
-}
 
-resp, err := m.fullMapResponse(node, peers, users, pol, mapRequest.Version)
+resp, err := m.fullMapResponse(node, peers, pol, mapRequest.Version)
 if err != nil {
 return nil, err
 }

@@ -259,11 +253,6 @@ func (m *Mapper) PeerChangedResponse(
 return nil, err
 }
 
-users, err := m.db.ListUsers()
-if err != nil {
-return nil, fmt.Errorf("listing users for map response: %w", err)
-}
-
 var removedIDs []tailcfg.NodeID
 var changedIDs []types.NodeID
 for nodeID, nodeChanged := range changed {

@@ -287,7 +276,6 @@ func (m *Mapper) PeerChangedResponse(
 pol,
 node,
 mapRequest.Version,
-users,
 peers,
 changedNodes,
 m.cfg,

@@ -520,17 +508,16 @@ func appendPeerChanges(
 pol *policy.ACLPolicy,
 node *types.Node,
 capVer tailcfg.CapabilityVersion,
-users []types.User,
 peers types.Nodes,
 changed types.Nodes,
 cfg *types.Config,
 ) error {
-packetFilter, err := pol.CompileFilterRules(users, append(peers, node))
+packetFilter, err := pol.CompileFilterRules(append(peers, node))
 if err != nil {
 return err
 }
 
-sshPolicy, err := pol.CompileSSHPolicy(node, users, peers)
+sshPolicy, err := pol.CompileSSHPolicy(node, peers)
 if err != nil {
 return err
 }
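In the mapper hunks above, the left-hand side loads the user table once per map response and threads it into both CompileFilterRules and CompileSSHPolicy. The sketch below only illustrates that plumbing with stand-in types and function names; it is not the real mapper code.

```go
// Sketch only: thread one user snapshot through both policy compilations.
package mappersketch

type (
	User   struct{ Name string }
	Node   struct{ Owner string }
	Filter struct{}
	Policy struct{}
)

// compileFilter and compileSSH are hypothetical stand-ins for the policy calls.
func compileFilter(users []User, nodes []*Node) Filter            { return Filter{} }
func compileSSH(node *Node, users []User, peers []*Node) Policy   { return Policy{} }

func buildMapResponse(listUsers func() ([]User, error), node *Node, peers []*Node) error {
	users, err := listUsers()
	if err != nil {
		return err
	}
	// Both compilations see the same user snapshot, so a username, email or
	// OIDC identifier in the ACL resolves consistently within one response.
	_ = compileFilter(users, append(peers, node))
	_ = compileSSH(node, users, peers)
	return nil
}
```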
@@ -159,9 +159,6 @@ func Test_fullMapResponse(t *testing.T) {
 lastSeen := time.Date(2009, time.November, 10, 23, 9, 0, 0, time.UTC)
 expire := time.Date(2500, time.November, 11, 23, 0, 0, 0, time.UTC)
 
-user1 := types.User{Model: gorm.Model{ID: 0}, Name: "mini"}
-user2 := types.User{Model: gorm.Model{ID: 1}, Name: "peer2"}
-
 mini := &types.Node{
 ID: 0,
 MachineKey: mustMK(

@@ -176,8 +173,8 @@ func Test_fullMapResponse(t *testing.T) {
 IPv4: iap("100.64.0.1"),
 Hostname: "mini",
 GivenName: "mini",
-UserID: user1.ID,
-User: user1,
+UserID: 0,
+User: types.User{Name: "mini"},
 ForcedTags: []string{},
 AuthKey: &types.PreAuthKey{},
 LastSeen: &lastSeen,

@@ -256,8 +253,8 @@ func Test_fullMapResponse(t *testing.T) {
 IPv4: iap("100.64.0.2"),
 Hostname: "peer1",
 GivenName: "peer1",
-UserID: user1.ID,
-User: user1,
+UserID: 0,
+User: types.User{Name: "mini"},
 ForcedTags: []string{},
 LastSeen: &lastSeen,
 Expiry: &expire,

@@ -311,8 +308,8 @@ func Test_fullMapResponse(t *testing.T) {
 IPv4: iap("100.64.0.3"),
 Hostname: "peer2",
 GivenName: "peer2",
-UserID: user2.ID,
-User: user2,
+UserID: 1,
+User: types.User{Name: "peer2"},
 ForcedTags: []string{},
 LastSeen: &lastSeen,
 Expiry: &expire,

@@ -471,7 +468,6 @@ func Test_fullMapResponse(t *testing.T) {
 got, err := mappy.fullMapResponse(
 tt.node,
 tt.peers,
-[]types.User{user1, user2},
 tt.pol,
 0,
 )
@@ -436,41 +436,24 @@ func (a *AuthProviderOIDC) createOrUpdateUserFromClaim(
 ) (*types.User, error) {
 var user *types.User
 var err error
-user, err = a.db.GetUserByOIDCIdentifier(claims.Identifier())
+user, err = a.db.GetUserByOIDCIdentifier(claims.Sub)
 if err != nil && !errors.Is(err, db.ErrUserNotFound) {
 return nil, fmt.Errorf("creating or updating user: %w", err)
 }
 
 // This check is for legacy, if the user cannot be found by the OIDC identifier
 // look it up by username. This should only be needed once.
-// This branch will presist for a number of versions after the OIDC migration and
-// then be removed following a deprecation.
-// TODO(kradalby): Remove when strip_email_domain and migration is removed
-// after #2170 is cleaned up.
-if a.cfg.MapLegacyUsers && user == nil {
-log.Trace().Str("username", claims.Username).Str("sub", claims.Sub).Msg("user not found by OIDC identifier, looking up by username")
-if oldUsername, err := getUserName(claims, a.cfg.StripEmaildomain); err == nil {
-log.Trace().Str("old_username", oldUsername).Str("sub", claims.Sub).Msg("found username")
-user, err = a.db.GetUserByName(oldUsername)
+if user == nil {
+user, err = a.db.GetUserByName(claims.Username)
 if err != nil && !errors.Is(err, db.ErrUserNotFound) {
-return nil, fmt.Errorf("getting user: %w", err)
-}
-
-// If the user exists, but it already has a provider identifier (OIDC sub), create a new user.
-// This is to prevent users that have already been migrated to the new OIDC format
-// to be updated with the new OIDC identifier inexplicitly which might be the cause of an
-// account takeover.
-if user != nil && user.ProviderIdentifier.Valid {
-log.Info().Str("username", claims.Username).Str("sub", claims.Sub).Msg("user found by username, but has provider identifier, creating new user.")
-user = &types.User{}
-}
-}
+return nil, fmt.Errorf("creating or updating user: %w", err)
 }
 
 // if the user is still not found, create a new empty user.
 if user == nil {
 user = &types.User{}
 }
+}
 
 user.FromClaim(claims)
 err = a.db.DB.Save(user).Error

@@ -519,24 +502,3 @@ func renderOIDCCallbackTemplate(
 
 return &content, nil
 }
-
-// TODO(kradalby): Reintroduce when strip_email_domain is removed
-// after #2170 is cleaned up
-// DEPRECATED: DO NOT USE
-func getUserName(
-claims *types.OIDCClaims,
-stripEmaildomain bool,
-) (string, error) {
-if !claims.EmailVerified {
-return "", fmt.Errorf("email not verified")
-}
-userName, err := util.NormalizeToFQDNRules(
-claims.Email,
-stripEmaildomain,
-)
-if err != nil {
-return "", err
-}
-
-return userName, nil
-}
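The createOrUpdateUserFromClaim hunk above encodes a lookup order: match on the OIDC identifier first, optionally fall back to the legacy username, and never silently adopt an account that already carries a provider identifier. A minimal sketch of that order, using stand-in types and hypothetical lookup callbacks rather than the real database layer:

```go
// Sketch only: the lookup order from the left-hand side of the OIDC hunk.
package oidcsketch

type User struct {
	Name               string
	ProviderIdentifier string
}

type Claims struct {
	Sub      string
	Username string
}

// resolveUser picks the account to update for an incoming OIDC login.
// byOIDC and byName are hypothetical lookup callbacks returning nil on no match.
func resolveUser(byOIDC func(sub string) *User, byName func(name string) *User, c Claims, mapLegacy bool) *User {
	if u := byOIDC(c.Sub); u != nil {
		return u // already migrated: iss/sub is the identity
	}
	if mapLegacy {
		if u := byName(c.Username); u != nil && u.ProviderIdentifier == "" {
			return u // legacy account without a provider identifier: safe to adopt once
		}
	}
	return &User{} // otherwise start a fresh user record
}
```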
@@ -137,21 +137,20 @@ func GenerateFilterAndSSHRulesForTests(
 policy *ACLPolicy,
 node *types.Node,
 peers types.Nodes,
-users []types.User,
 ) ([]tailcfg.FilterRule, *tailcfg.SSHPolicy, error) {
 // If there is no policy defined, we default to allow all
 if policy == nil {
 return tailcfg.FilterAllowAll, &tailcfg.SSHPolicy{}, nil
 }
 
-rules, err := policy.CompileFilterRules(users, append(peers, node))
+rules, err := policy.CompileFilterRules(append(peers, node))
 if err != nil {
 return []tailcfg.FilterRule{}, &tailcfg.SSHPolicy{}, err
 }
 
 log.Trace().Interface("ACL", rules).Str("node", node.GivenName).Msg("ACL rules")
 
-sshPolicy, err := policy.CompileSSHPolicy(node, users, peers)
+sshPolicy, err := policy.CompileSSHPolicy(node, peers)
 if err != nil {
 return []tailcfg.FilterRule{}, &tailcfg.SSHPolicy{}, err
 }

@@ -162,7 +161,6 @@ func GenerateFilterAndSSHRulesForTests(
 // CompileFilterRules takes a set of nodes and an ACLPolicy and generates a
 // set of Tailscale compatible FilterRules used to allow traffic on clients.
 func (pol *ACLPolicy) CompileFilterRules(
-users []types.User,
 nodes types.Nodes,
 ) ([]tailcfg.FilterRule, error) {
 if pol == nil {

@@ -178,14 +176,9 @@ func (pol *ACLPolicy) CompileFilterRules(
 
 var srcIPs []string
 for srcIndex, src := range acl.Sources {
-srcs, err := pol.expandSource(src, users, nodes)
+srcs, err := pol.expandSource(src, nodes)
 if err != nil {
-return nil, fmt.Errorf(
-"parsing policy, acl index: %d->%d: %w",
-index,
-srcIndex,
-err,
-)
+return nil, fmt.Errorf("parsing policy, acl index: %d->%d: %w", index, srcIndex, err)
 }
 srcIPs = append(srcIPs, srcs...)
 }

@@ -204,7 +197,6 @@ func (pol *ACLPolicy) CompileFilterRules(
 
 expanded, err := pol.ExpandAlias(
 nodes,
-users,
 alias,
 )
 if err != nil {

@@ -289,7 +281,6 @@ func ReduceFilterRules(node *types.Node, rules []tailcfg.FilterRule) []tailcfg.F
 
 func (pol *ACLPolicy) CompileSSHPolicy(
 node *types.Node,
-users []types.User,
 peers types.Nodes,
 ) (*tailcfg.SSHPolicy, error) {
 if pol == nil {

@@ -321,7 +312,7 @@ func (pol *ACLPolicy) CompileSSHPolicy(
 for index, sshACL := range pol.SSHs {
 var dest netipx.IPSetBuilder
 for _, src := range sshACL.Destinations {
-expanded, err := pol.ExpandAlias(append(peers, node), users, src)
+expanded, err := pol.ExpandAlias(append(peers, node), src)
 if err != nil {
 return nil, err
 }

@@ -344,21 +335,12 @@ func (pol *ACLPolicy) CompileSSHPolicy(
 case "check":
 checkAction, err := sshCheckAction(sshACL.CheckPeriod)
 if err != nil {
-return nil, fmt.Errorf(
-"parsing SSH policy, parsing check duration, index: %d: %w",
-index,
-err,
-)
+return nil, fmt.Errorf("parsing SSH policy, parsing check duration, index: %d: %w", index, err)
 } else {
 action = *checkAction
 }
 default:
-return nil, fmt.Errorf(
-"parsing SSH policy, unknown action %q, index: %d: %w",
-sshACL.Action,
-index,
-err,
-)
+return nil, fmt.Errorf("parsing SSH policy, unknown action %q, index: %d: %w", sshACL.Action, index, err)
 }
 
 principals := make([]*tailcfg.SSHPrincipal, 0, len(sshACL.Sources))

@@ -381,7 +363,6 @@ func (pol *ACLPolicy) CompileSSHPolicy(
 } else {
 expandedSrcs, err := pol.ExpandAlias(
 peers,
-users,
 rawSrc,
 )
 if err != nil {

@@ -531,10 +512,9 @@ func parseProtocol(protocol string) ([]int, bool, error) {
 // with the given src alias.
 func (pol *ACLPolicy) expandSource(
 src string,
-users []types.User,
 nodes types.Nodes,
 ) ([]string, error) {
-ipSet, err := pol.ExpandAlias(nodes, users, src)
+ipSet, err := pol.ExpandAlias(nodes, src)
 if err != nil {
 return []string{}, err
 }

@@ -558,7 +538,6 @@ func (pol *ACLPolicy) expandSource(
 // and transform these in IPAddresses.
 func (pol *ACLPolicy) ExpandAlias(
 nodes types.Nodes,
-users []types.User,
 alias string,
 ) (*netipx.IPSet, error) {
 if isWildcard(alias) {

@@ -573,12 +552,12 @@ func (pol *ACLPolicy) ExpandAlias(
 
 // if alias is a group
 if isGroup(alias) {
-return pol.expandIPsFromGroup(alias, users, nodes)
+return pol.expandIPsFromGroup(alias, nodes)
 }
 
 // if alias is a tag
 if isTag(alias) {
-return pol.expandIPsFromTag(alias, users, nodes)
+return pol.expandIPsFromTag(alias, nodes)
 }
 
 if isAutoGroup(alias) {

@@ -586,7 +565,7 @@ func (pol *ACLPolicy) ExpandAlias(
 }
 
 // if alias is a user
-if ips, err := pol.expandIPsFromUser(alias, users, nodes); ips != nil {
+if ips, err := pol.expandIPsFromUser(alias, nodes); ips != nil {
 return ips, err
 }
 

@@ -595,7 +574,7 @@ func (pol *ACLPolicy) ExpandAlias(
 if h, ok := pol.Hosts[alias]; ok {
 log.Trace().Str("host", h.String()).Msg("ExpandAlias got hosts entry")
 
-return pol.ExpandAlias(nodes, users, h.String())
+return pol.ExpandAlias(nodes, h.String())
 }
 
 // if alias is an IP

@@ -772,17 +751,16 @@ func (pol *ACLPolicy) expandUsersFromGroup(
 
 func (pol *ACLPolicy) expandIPsFromGroup(
 group string,
-users []types.User,
 nodes types.Nodes,
 ) (*netipx.IPSet, error) {
 var build netipx.IPSetBuilder
 
-userTokens, err := pol.expandUsersFromGroup(group)
+users, err := pol.expandUsersFromGroup(group)
 if err != nil {
 return &netipx.IPSet{}, err
 }
-for _, user := range userTokens {
-filteredNodes := filterNodesByUser(nodes, users, user)
+for _, user := range users {
+filteredNodes := filterNodesByUser(nodes, user)
 for _, node := range filteredNodes {
 node.AppendToIPSet(&build)
 }

@@ -793,7 +771,6 @@ func (pol *ACLPolicy) expandIPsFromGroup(
 
 func (pol *ACLPolicy) expandIPsFromTag(
 alias string,
-users []types.User,
 nodes types.Nodes,
 ) (*netipx.IPSet, error) {
 var build netipx.IPSetBuilder

@@ -826,7 +803,7 @@ func (pol *ACLPolicy) expandIPsFromTag(
 
 // filter out nodes per tag owner
 for _, user := range owners {
-nodes := filterNodesByUser(nodes, users, user)
+nodes := filterNodesByUser(nodes, user)
 for _, node := range nodes {
 if node.Hostinfo == nil {
 continue

@@ -843,12 +820,11 @@ func (pol *ACLPolicy) expandIPsFromTag(
 
 func (pol *ACLPolicy) expandIPsFromUser(
 user string,
-users []types.User,
 nodes types.Nodes,
 ) (*netipx.IPSet, error) {
 var build netipx.IPSetBuilder
 
-filteredNodes := filterNodesByUser(nodes, users, user)
+filteredNodes := filterNodesByUser(nodes, user)
 filteredNodes = excludeCorrectlyTaggedNodes(pol, filteredNodes, user)
 
 // shortcurcuit if we have no nodes to get ips from.

@@ -977,43 +953,10 @@ func (pol *ACLPolicy) TagsOfNode(
 return validTags, invalidTags
 }
 
-// filterNodesByUser returns a list of nodes that match the given userToken from a
-// policy.
-// Matching nodes are determined by first matching the user token to a user by checking:
-// - If it is an ID that mactches the user database ID
-// - It is the Provider Identifier from OIDC
-// - It matches the username or email of a user
-//
-// If the token matches more than one user, zero nodes will returned.
-func filterNodesByUser(nodes types.Nodes, users []types.User, userToken string) types.Nodes {
+func filterNodesByUser(nodes types.Nodes, user string) types.Nodes {
 var out types.Nodes
 
-var potentialUsers []types.User
-for _, user := range users {
-if user.ProviderIdentifier.Valid && user.ProviderIdentifier.String == userToken {
-// If a user is matching with a known unique field,
-// disgard all other users and only keep the current
-// user.
-potentialUsers = []types.User{user}
-
-break
-}
-if user.Email == userToken {
-potentialUsers = append(potentialUsers, user)
-}
-if user.Name == userToken {
-potentialUsers = append(potentialUsers, user)
-}
-}
-
-if len(potentialUsers) != 1 {
-return nil
-}
-
-user := potentialUsers[0]
-
 for _, node := range nodes {
-if node.User.ID == user.ID {
+if node.User.Username() == user {
 out = append(out, node)
 }
 }

@@ -1034,7 +977,10 @@ func FilterNodesByACL(
 continue
 }
 
+log.Printf("Checking if %s can access %s", node.Hostname, peer.Hostname)
+
 if node.CanAccess(filter, nodes[index]) || peer.CanAccess(filter, node) {
+log.Printf("CAN ACCESS %s can access %s", node.Hostname, peer.Hostname)
 result = append(result, peer)
 }
 }
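The filterNodesByUser hunk above removes a matching rule in which a user token from the policy only selects nodes when it resolves to exactly one user, matched against the OIDC provider identifier, the email or the username. The sketch below restates that rule with stand-in types; it is not the project's function, and the real code also handles tagged nodes and node ownership.

```go
// Sketch only: unique-match rule for a policy user token, per the removed code.
package aclsketch

type User struct {
	ID                 uint
	Name               string
	Email              string
	ProviderIdentifier string
}

// matchUser returns the single user the token refers to, or false when the
// token is unknown or ambiguous (ambiguous tokens must select nothing).
func matchUser(users []User, token string) (User, bool) {
	var hits []User
	for _, u := range users {
		if u.ProviderIdentifier != "" && u.ProviderIdentifier == token {
			return u, true // unique field: short-circuit on the provider identifier
		}
		if u.Email == token || u.Name == token {
			hits = append(hits, u)
		}
	}
	if len(hits) != 1 {
		return User{}, false
	}
	return hits[0], true
}
```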
@@ -1,12 +1,9 @@
 package policy
 
 import (
-"database/sql"
 "errors"
-"math/rand/v2"
 "net/netip"
 "slices"
-"sort"
 "testing"
 
 "github.com/google/go-cmp/cmp"

@@ -17,7 +14,6 @@ import (
 "github.com/stretchr/testify/require"
 "go4.org/netipx"
 "gopkg.in/check.v1"
-"gorm.io/gorm"
 "tailscale.com/net/tsaddr"
 "tailscale.com/tailcfg"
 )

@@ -379,21 +375,15 @@ func TestParsing(t *testing.T) {
 return
 }
 
-user := types.User{
-Model: gorm.Model{ID: 1},
-Name: "testuser",
-}
-rules, err := pol.CompileFilterRules(
-[]types.User{
-user,
-},
-types.Nodes{
+rules, err := pol.CompileFilterRules(types.Nodes{
 &types.Node{
 IPv4: iap("100.100.100.100"),
 },
 &types.Node{
 IPv4: iap("200.200.200.200"),
-User: user,
+User: types.User{
+Name: "testuser",
+},
 Hostinfo: &tailcfg.Hostinfo{},
 },
 })

@@ -543,7 +533,7 @@ func (s *Suite) TestRuleInvalidGeneration(c *check.C) {
 c.Assert(pol.ACLs, check.HasLen, 6)
 c.Assert(err, check.IsNil)
 
-rules, err := pol.CompileFilterRules([]types.User{}, types.Nodes{})
+rules, err := pol.CompileFilterRules(types.Nodes{})
 c.Assert(err, check.NotNil)
 c.Assert(rules, check.IsNil)
 }

@@ -559,12 +549,7 @@ func (s *Suite) TestInvalidAction(c *check.C) {
 },
 },
 }
-_, _, err := GenerateFilterAndSSHRulesForTests(
-pol,
-&types.Node{},
-types.Nodes{},
-[]types.User{},
-)
+_, _, err := GenerateFilterAndSSHRulesForTests(pol, &types.Node{}, types.Nodes{})
 c.Assert(errors.Is(err, ErrInvalidAction), check.Equals, true)
 }
 

@@ -583,12 +568,7 @@ func (s *Suite) TestInvalidGroupInGroup(c *check.C) {
 },
 },
 }
-_, _, err := GenerateFilterAndSSHRulesForTests(
-pol,
-&types.Node{},
-types.Nodes{},
-[]types.User{},
-)
+_, _, err := GenerateFilterAndSSHRulesForTests(pol, &types.Node{}, types.Nodes{})
 c.Assert(errors.Is(err, ErrInvalidGroup), check.Equals, true)
 }
 

@@ -604,12 +584,7 @@ func (s *Suite) TestInvalidTagOwners(c *check.C) {
 },
 }
 
-_, _, err := GenerateFilterAndSSHRulesForTests(
-pol,
-&types.Node{},
-types.Nodes{},
-[]types.User{},
-)
+_, _, err := GenerateFilterAndSSHRulesForTests(pol, &types.Node{}, types.Nodes{})
 c.Assert(errors.Is(err, ErrInvalidTag), check.Equals, true)
 }
 

@@ -885,25 +860,7 @@ func Test_expandPorts(t *testing.T) {
 }
 }
 
-func Test_filterNodesByUser(t *testing.T) {
-users := []types.User{
-{Model: gorm.Model{ID: 1}, Name: "marc"},
-{Model: gorm.Model{ID: 2}, Name: "joe", Email: "joe@headscale.net"},
-{
-Model: gorm.Model{ID: 3},
-Name: "mikael",
-Email: "mikael@headscale.net",
-ProviderIdentifier: sql.NullString{String: "http://oidc.org/1234", Valid: true},
-},
-{Model: gorm.Model{ID: 4}, Name: "mikael2", Email: "mikael@headscale.net"},
-{Model: gorm.Model{ID: 5}, Name: "mikael", Email: "mikael2@headscale.net"},
-{Model: gorm.Model{ID: 6}, Name: "http://oidc.org/1234", Email: "mikael@headscale.net"},
-{Model: gorm.Model{ID: 7}, Name: "1"},
-{Model: gorm.Model{ID: 8}, Name: "alex", Email: "alex@headscale.net"},
-{Model: gorm.Model{ID: 9}, Name: "alex@headscale.net"},
-{Model: gorm.Model{ID: 10}, Email: "http://oidc.org/1234"},
-}
-
+func Test_listNodesInUser(t *testing.T) {
 type args struct {
 nodes types.Nodes
 user string

@@ -917,258 +874,50 @@ func Test_filterNodesByUser(t *testing.T) {
 name: "1 node in user",
 args: args{
 nodes: types.Nodes{
-&types.Node{User: users[1]},
+&types.Node{User: types.User{Name: "joe"}},
 },
 user: "joe",
 },
 want: types.Nodes{
-&types.Node{User: users[1]},
+&types.Node{User: types.User{Name: "joe"}},
 },
 },
 {
 name: "3 nodes, 2 in user",
 args: args{
 nodes: types.Nodes{
-&types.Node{ID: 1, User: users[1]},
-&types.Node{ID: 2, User: users[0]},
-&types.Node{ID: 3, User: users[0]},
+&types.Node{ID: 1, User: types.User{Name: "joe"}},
+&types.Node{ID: 2, User: types.User{Name: "marc"}},
+&types.Node{ID: 3, User: types.User{Name: "marc"}},
 },
 user: "marc",
 },
 want: types.Nodes{
-&types.Node{ID: 2, User: users[0]},
-&types.Node{ID: 3, User: users[0]},
+&types.Node{ID: 2, User: types.User{Name: "marc"}},
+&types.Node{ID: 3, User: types.User{Name: "marc"}},
 },
 },
 {
 name: "5 nodes, 0 in user",
 args: args{
 nodes: types.Nodes{
-&types.Node{ID: 1, User: users[1]},
-&types.Node{ID: 2, User: users[0]},
-&types.Node{ID: 3, User: users[0]},
-&types.Node{ID: 4, User: users[0]},
-&types.Node{ID: 5, User: users[0]},
+&types.Node{ID: 1, User: types.User{Name: "joe"}},
+&types.Node{ID: 2, User: types.User{Name: "marc"}},
+&types.Node{ID: 3, User: types.User{Name: "marc"}},
+&types.Node{ID: 4, User: types.User{Name: "marc"}},
+&types.Node{ID: 5, User: types.User{Name: "marc"}},
 },
 user: "mickael",
 },
 want: nil,
 },
-{
-name: "match-by-provider-ident",
-args: args{
-nodes: types.Nodes{
-&types.Node{ID: 1, User: users[1]},
-&types.Node{ID: 2, User: users[2]},
-},
-user: "http://oidc.org/1234",
-},
-want: types.Nodes{
-&types.Node{ID: 2, User: users[2]},
-},
-},
-{
-name: "match-by-email",
-args: args{
-nodes: types.Nodes{
-&types.Node{ID: 1, User: users[1]},
-&types.Node{ID: 2, User: users[2]},
-&types.Node{ID: 8, User: users[7]},
-},
-user: "joe@headscale.net",
-},
-want: types.Nodes{
-&types.Node{ID: 1, User: users[1]},
-},
-},
-{
-name: "multi-match-is-zero",
-args: args{
-nodes: types.Nodes{
-&types.Node{ID: 1, User: users[1]},
-&types.Node{ID: 2, User: users[2]},
-&types.Node{ID: 3, User: users[3]},
-},
-user: "mikael@headscale.net",
-},
-want: nil,
-},
-{
-name: "multi-email-first-match-is-zero",
-args: args{
-nodes: types.Nodes{
-// First match email, then provider id
-&types.Node{ID: 3, User: users[3]},
-&types.Node{ID: 2, User: users[2]},
-},
-user: "mikael@headscale.net",
-},
-want: nil,
-},
-{
-name: "multi-username-first-match-is-zero",
-args: args{
-nodes: types.Nodes{
-// First match username, then provider id
-&types.Node{ID: 4, User: users[3]},
-&types.Node{ID: 2, User: users[2]},
-},
-user: "mikael",
-},
-want: nil,
-},
-{
-name: "all-users-duplicate-username-random-order",
-args: args{
-nodes: types.Nodes{
-&types.Node{ID: 1, User: users[0]},
-&types.Node{ID: 2, User: users[1]},
-&types.Node{ID: 3, User: users[2]},
-&types.Node{ID: 4, User: users[3]},
-&types.Node{ID: 5, User: users[4]},
-},
-user: "mikael",
-},
-want: nil,
-},
-{
-name: "all-users-unique-username-random-order",
-args: args{
-nodes: types.Nodes{
-&types.Node{ID: 1, User: users[0]},
-&types.Node{ID: 2, User: users[1]},
-&types.Node{ID: 3, User: users[2]},
-&types.Node{ID: 4, User: users[3]},
-&types.Node{ID: 5, User: users[4]},
-},
-user: "marc",
-},
-want: types.Nodes{
-&types.Node{ID: 1, User: users[0]},
-},
-},
-{
-name: "all-users-no-username-random-order",
-args: args{
-nodes: types.Nodes{
-&types.Node{ID: 1, User: users[0]},
-&types.Node{ID: 2, User: users[1]},
-&types.Node{ID: 3, User: users[2]},
-&types.Node{ID: 4, User: users[3]},
-&types.Node{ID: 5, User: users[4]},
-},
-user: "not-working",
-},
-want: nil,
-},
-{
-name: "all-users-duplicate-email-random-order",
-args: args{
-nodes: types.Nodes{
-&types.Node{ID: 1, User: users[0]},
-&types.Node{ID: 2, User: users[1]},
-&types.Node{ID: 3, User: users[2]},
-&types.Node{ID: 4, User: users[3]},
-&types.Node{ID: 5, User: users[4]},
-},
-user: "mikael@headscale.net",
-},
-want: nil,
-},
-{
-name: "all-users-duplicate-email-random-order",
-args: args{
-nodes: types.Nodes{
-&types.Node{ID: 1, User: users[0]},
-&types.Node{ID: 2, User: users[1]},
-&types.Node{ID: 3, User: users[2]},
-&types.Node{ID: 4, User: users[3]},
-&types.Node{ID: 5, User: users[4]},
-&types.Node{ID: 8, User: users[7]},
-},
-user: "joe@headscale.net",
-},
-want: types.Nodes{
-&types.Node{ID: 2, User: users[1]},
-},
|
||||||
{
|
|
||||||
name: "email-as-username-duplicate",
|
|
||||||
args: args{
|
|
||||||
nodes: types.Nodes{
|
|
||||||
&types.Node{ID: 1, User: users[7]},
|
|
||||||
&types.Node{ID: 2, User: users[8]},
|
|
||||||
},
|
|
||||||
user: "alex@headscale.net",
|
|
||||||
},
|
|
||||||
want: nil,
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "all-users-no-email-random-order",
|
|
||||||
args: args{
|
|
||||||
nodes: types.Nodes{
|
|
||||||
&types.Node{ID: 1, User: users[0]},
|
|
||||||
&types.Node{ID: 2, User: users[1]},
|
|
||||||
&types.Node{ID: 3, User: users[2]},
|
|
||||||
&types.Node{ID: 4, User: users[3]},
|
|
||||||
&types.Node{ID: 5, User: users[4]},
|
|
||||||
},
|
|
||||||
user: "not-working@headscale.net",
|
|
||||||
},
|
|
||||||
want: nil,
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "all-users-provider-id-random-order",
|
|
||||||
args: args{
|
|
||||||
nodes: types.Nodes{
|
|
||||||
&types.Node{ID: 1, User: users[0]},
|
|
||||||
&types.Node{ID: 2, User: users[1]},
|
|
||||||
&types.Node{ID: 3, User: users[2]},
|
|
||||||
&types.Node{ID: 4, User: users[3]},
|
|
||||||
&types.Node{ID: 5, User: users[4]},
|
|
||||||
&types.Node{ID: 6, User: users[5]},
|
|
||||||
},
|
|
||||||
user: "http://oidc.org/1234",
|
|
||||||
},
|
|
||||||
want: types.Nodes{
|
|
||||||
&types.Node{ID: 3, User: users[2]},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "all-users-no-provider-id-random-order",
|
|
||||||
args: args{
|
|
||||||
nodes: types.Nodes{
|
|
||||||
&types.Node{ID: 1, User: users[0]},
|
|
||||||
&types.Node{ID: 2, User: users[1]},
|
|
||||||
&types.Node{ID: 3, User: users[2]},
|
|
||||||
&types.Node{ID: 4, User: users[3]},
|
|
||||||
&types.Node{ID: 5, User: users[4]},
|
|
||||||
&types.Node{ID: 6, User: users[5]},
|
|
||||||
},
|
|
||||||
user: "http://oidc.org/4321",
|
|
||||||
},
|
|
||||||
want: nil,
|
|
||||||
},
|
|
||||||
}
|
}
|
||||||
for _, test := range tests {
|
for _, test := range tests {
|
||||||
t.Run(test.name, func(t *testing.T) {
|
t.Run(test.name, func(t *testing.T) {
|
||||||
for range 1000 {
|
got := filterNodesByUser(test.args.nodes, test.args.user)
|
||||||
ns := test.args.nodes
|
|
||||||
rand.Shuffle(len(ns), func(i, j int) {
|
|
||||||
ns[i], ns[j] = ns[j], ns[i]
|
|
||||||
})
|
|
||||||
us := users
|
|
||||||
rand.Shuffle(len(us), func(i, j int) {
|
|
||||||
us[i], us[j] = us[j], us[i]
|
|
||||||
})
|
|
||||||
got := filterNodesByUser(ns, us, test.args.user)
|
|
||||||
sort.Slice(got, func(i, j int) bool {
|
|
||||||
return got[i].ID < got[j].ID
|
|
||||||
})
|
|
||||||
|
|
||||||
if diff := cmp.Diff(test.want, got, util.Comparers...); diff != "" {
|
if diff := cmp.Diff(test.want, got, util.Comparers...); diff != "" {
|
||||||
t.Errorf("filterNodesByUser() = (-want +got):\n%s", diff)
|
t.Errorf("listNodesInUser() = (-want +got):\n%s", diff)
|
||||||
}
|
|
||||||
}
|
}
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
@ -1191,12 +940,6 @@ func Test_expandAlias(t *testing.T) {
|
||||||
return s
|
return s
|
||||||
}
|
}
|
||||||
|
|
||||||
users := []types.User{
|
|
||||||
{Model: gorm.Model{ID: 1}, Name: "joe"},
|
|
||||||
{Model: gorm.Model{ID: 2}, Name: "marc"},
|
|
||||||
{Model: gorm.Model{ID: 3}, Name: "mickael"},
|
|
||||||
}
|
|
||||||
|
|
||||||
type field struct {
|
type field struct {
|
||||||
pol ACLPolicy
|
pol ACLPolicy
|
||||||
}
|
}
|
||||||
|
@ -1246,19 +989,19 @@ func Test_expandAlias(t *testing.T) {
|
||||||
nodes: types.Nodes{
|
nodes: types.Nodes{
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.3"),
|
IPv4: iap("100.64.0.3"),
|
||||||
User: users[1],
|
User: types.User{Name: "marc"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.4"),
|
IPv4: iap("100.64.0.4"),
|
||||||
User: users[2],
|
User: types.User{Name: "mickael"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
|
@ -1279,19 +1022,19 @@ func Test_expandAlias(t *testing.T) {
|
||||||
nodes: types.Nodes{
|
nodes: types.Nodes{
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.3"),
|
IPv4: iap("100.64.0.3"),
|
||||||
User: users[1],
|
User: types.User{Name: "marc"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.4"),
|
IPv4: iap("100.64.0.4"),
|
||||||
User: users[2],
|
User: types.User{Name: "mickael"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
|
@ -1442,7 +1185,7 @@ func Test_expandAlias(t *testing.T) {
|
||||||
nodes: types.Nodes{
|
nodes: types.Nodes{
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
OS: "centos",
|
OS: "centos",
|
||||||
Hostname: "foo",
|
Hostname: "foo",
|
||||||
|
@ -1451,7 +1194,7 @@ func Test_expandAlias(t *testing.T) {
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
OS: "centos",
|
OS: "centos",
|
||||||
Hostname: "foo",
|
Hostname: "foo",
|
||||||
|
@ -1460,11 +1203,11 @@ func Test_expandAlias(t *testing.T) {
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.3"),
|
IPv4: iap("100.64.0.3"),
|
||||||
User: users[1],
|
User: types.User{Name: "marc"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.4"),
|
IPv4: iap("100.64.0.4"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
|
@ -1517,21 +1260,21 @@ func Test_expandAlias(t *testing.T) {
|
||||||
nodes: types.Nodes{
|
nodes: types.Nodes{
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
ForcedTags: []string{"tag:hr-webserver"},
|
ForcedTags: []string{"tag:hr-webserver"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
ForcedTags: []string{"tag:hr-webserver"},
|
ForcedTags: []string{"tag:hr-webserver"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.3"),
|
IPv4: iap("100.64.0.3"),
|
||||||
User: users[1],
|
User: types.User{Name: "marc"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.4"),
|
IPv4: iap("100.64.0.4"),
|
||||||
User: users[2],
|
User: types.User{Name: "mickael"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
|
@ -1552,12 +1295,12 @@ func Test_expandAlias(t *testing.T) {
|
||||||
nodes: types.Nodes{
|
nodes: types.Nodes{
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
ForcedTags: []string{"tag:hr-webserver"},
|
ForcedTags: []string{"tag:hr-webserver"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
OS: "centos",
|
OS: "centos",
|
||||||
Hostname: "foo",
|
Hostname: "foo",
|
||||||
|
@ -1566,11 +1309,11 @@ func Test_expandAlias(t *testing.T) {
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.3"),
|
IPv4: iap("100.64.0.3"),
|
||||||
User: users[1],
|
User: types.User{Name: "marc"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.4"),
|
IPv4: iap("100.64.0.4"),
|
||||||
User: users[2],
|
User: types.User{Name: "mickael"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
|
@ -1607,12 +1350,12 @@ func Test_expandAlias(t *testing.T) {
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.3"),
|
IPv4: iap("100.64.0.3"),
|
||||||
User: users[1],
|
User: types.User{Name: "marc"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{},
|
Hostinfo: &tailcfg.Hostinfo{},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.4"),
|
IPv4: iap("100.64.0.4"),
|
||||||
User: users[0],
|
User: types.User{Name: "joe"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{},
|
Hostinfo: &tailcfg.Hostinfo{},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
|
@ -1625,7 +1368,6 @@ func Test_expandAlias(t *testing.T) {
|
||||||
t.Run(test.name, func(t *testing.T) {
|
t.Run(test.name, func(t *testing.T) {
|
||||||
got, err := test.field.pol.ExpandAlias(
|
got, err := test.field.pol.ExpandAlias(
|
||||||
test.args.nodes,
|
test.args.nodes,
|
||||||
users,
|
|
||||||
test.args.alias,
|
test.args.alias,
|
||||||
)
|
)
|
||||||
if (err != nil) != test.wantErr {
|
if (err != nil) != test.wantErr {
|
||||||
|
@ -1973,7 +1715,6 @@ func TestACLPolicy_generateFilterRules(t *testing.T) {
|
||||||
for _, tt := range tests {
|
for _, tt := range tests {
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
got, err := tt.field.pol.CompileFilterRules(
|
got, err := tt.field.pol.CompileFilterRules(
|
||||||
[]types.User{},
|
|
||||||
tt.args.nodes,
|
tt.args.nodes,
|
||||||
)
|
)
|
||||||
if (err != nil) != tt.wantErr {
|
if (err != nil) != tt.wantErr {
|
||||||
|
@ -2101,13 +1842,6 @@ func TestTheInternet(t *testing.T) {
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestReduceFilterRules(t *testing.T) {
|
func TestReduceFilterRules(t *testing.T) {
|
||||||
users := []types.User{
|
|
||||||
{Model: gorm.Model{ID: 1}, Name: "mickael"},
|
|
||||||
{Model: gorm.Model{ID: 2}, Name: "user1"},
|
|
||||||
{Model: gorm.Model{ID: 3}, Name: "user2"},
|
|
||||||
{Model: gorm.Model{ID: 4}, Name: "user100"},
|
|
||||||
}
|
|
||||||
|
|
||||||
tests := []struct {
|
tests := []struct {
|
||||||
name string
|
name string
|
||||||
node *types.Node
|
node *types.Node
|
||||||
|
@ -2129,13 +1863,13 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
node: &types.Node{
|
node: &types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
IPv6: iap("fd7a:115c:a1e0:ab12:4843:2222:6273:2221"),
|
IPv6: iap("fd7a:115c:a1e0:ab12:4843:2222:6273:2221"),
|
||||||
User: users[0],
|
User: types.User{Name: "mickael"},
|
||||||
},
|
},
|
||||||
peers: types.Nodes{
|
peers: types.Nodes{
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
IPv6: iap("fd7a:115c:a1e0:ab12:4843:2222:6273:2222"),
|
IPv6: iap("fd7a:115c:a1e0:ab12:4843:2222:6273:2222"),
|
||||||
User: users[0],
|
User: types.User{Name: "mickael"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
want: []tailcfg.FilterRule{},
|
want: []tailcfg.FilterRule{},
|
||||||
|
@ -2162,7 +1896,7 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
node: &types.Node{
|
node: &types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::1"),
|
IPv6: iap("fd7a:115c:a1e0::1"),
|
||||||
User: users[1],
|
User: types.User{Name: "user1"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
RoutableIPs: []netip.Prefix{
|
RoutableIPs: []netip.Prefix{
|
||||||
netip.MustParsePrefix("10.33.0.0/16"),
|
netip.MustParsePrefix("10.33.0.0/16"),
|
||||||
|
@ -2173,7 +1907,7 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::2"),
|
IPv6: iap("fd7a:115c:a1e0::2"),
|
||||||
User: users[1],
|
User: types.User{Name: "user1"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
want: []tailcfg.FilterRule{
|
want: []tailcfg.FilterRule{
|
||||||
|
@ -2241,19 +1975,19 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
node: &types.Node{
|
node: &types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::1"),
|
IPv6: iap("fd7a:115c:a1e0::1"),
|
||||||
User: users[1],
|
User: types.User{Name: "user1"},
|
||||||
},
|
},
|
||||||
peers: types.Nodes{
|
peers: types.Nodes{
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::2"),
|
IPv6: iap("fd7a:115c:a1e0::2"),
|
||||||
User: users[2],
|
User: types.User{Name: "user2"},
|
||||||
},
|
},
|
||||||
// "internal" exit node
|
// "internal" exit node
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.100"),
|
IPv4: iap("100.64.0.100"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::100"),
|
IPv6: iap("fd7a:115c:a1e0::100"),
|
||||||
User: users[3],
|
User: types.User{Name: "user100"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
RoutableIPs: tsaddr.ExitRoutes(),
|
RoutableIPs: tsaddr.ExitRoutes(),
|
||||||
},
|
},
|
||||||
|
@ -2300,12 +2034,12 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::2"),
|
IPv6: iap("fd7a:115c:a1e0::2"),
|
||||||
User: users[2],
|
User: types.User{Name: "user2"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::1"),
|
IPv6: iap("fd7a:115c:a1e0::1"),
|
||||||
User: users[1],
|
User: types.User{Name: "user1"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
want: []tailcfg.FilterRule{
|
want: []tailcfg.FilterRule{
|
||||||
|
@ -2397,7 +2131,7 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
node: &types.Node{
|
node: &types.Node{
|
||||||
IPv4: iap("100.64.0.100"),
|
IPv4: iap("100.64.0.100"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::100"),
|
IPv6: iap("fd7a:115c:a1e0::100"),
|
||||||
User: users[3],
|
User: types.User{Name: "user100"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
RoutableIPs: tsaddr.ExitRoutes(),
|
RoutableIPs: tsaddr.ExitRoutes(),
|
||||||
},
|
},
|
||||||
|
@ -2406,12 +2140,12 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::2"),
|
IPv6: iap("fd7a:115c:a1e0::2"),
|
||||||
User: users[2],
|
User: types.User{Name: "user2"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::1"),
|
IPv6: iap("fd7a:115c:a1e0::1"),
|
||||||
User: users[1],
|
User: types.User{Name: "user1"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
want: []tailcfg.FilterRule{
|
want: []tailcfg.FilterRule{
|
||||||
|
@ -2509,7 +2243,7 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
node: &types.Node{
|
node: &types.Node{
|
||||||
IPv4: iap("100.64.0.100"),
|
IPv4: iap("100.64.0.100"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::100"),
|
IPv6: iap("fd7a:115c:a1e0::100"),
|
||||||
User: users[3],
|
User: types.User{Name: "user100"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
RoutableIPs: []netip.Prefix{
|
RoutableIPs: []netip.Prefix{
|
||||||
netip.MustParsePrefix("8.0.0.0/16"),
|
netip.MustParsePrefix("8.0.0.0/16"),
|
||||||
|
@ -2521,12 +2255,12 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::2"),
|
IPv6: iap("fd7a:115c:a1e0::2"),
|
||||||
User: users[2],
|
User: types.User{Name: "user2"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::1"),
|
IPv6: iap("fd7a:115c:a1e0::1"),
|
||||||
User: users[1],
|
User: types.User{Name: "user1"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
want: []tailcfg.FilterRule{
|
want: []tailcfg.FilterRule{
|
||||||
|
@ -2599,7 +2333,7 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
node: &types.Node{
|
node: &types.Node{
|
||||||
IPv4: iap("100.64.0.100"),
|
IPv4: iap("100.64.0.100"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::100"),
|
IPv6: iap("fd7a:115c:a1e0::100"),
|
||||||
User: users[3],
|
User: types.User{Name: "user100"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
RoutableIPs: []netip.Prefix{
|
RoutableIPs: []netip.Prefix{
|
||||||
netip.MustParsePrefix("8.0.0.0/8"),
|
netip.MustParsePrefix("8.0.0.0/8"),
|
||||||
|
@ -2611,12 +2345,12 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::2"),
|
IPv6: iap("fd7a:115c:a1e0::2"),
|
||||||
User: users[2],
|
User: types.User{Name: "user2"},
|
||||||
},
|
},
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::1"),
|
IPv6: iap("fd7a:115c:a1e0::1"),
|
||||||
User: users[1],
|
User: types.User{Name: "user1"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
want: []tailcfg.FilterRule{
|
want: []tailcfg.FilterRule{
|
||||||
|
@ -2682,7 +2416,7 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
node: &types.Node{
|
node: &types.Node{
|
||||||
IPv4: iap("100.64.0.100"),
|
IPv4: iap("100.64.0.100"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::100"),
|
IPv6: iap("fd7a:115c:a1e0::100"),
|
||||||
User: users[3],
|
User: types.User{Name: "user100"},
|
||||||
Hostinfo: &tailcfg.Hostinfo{
|
Hostinfo: &tailcfg.Hostinfo{
|
||||||
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("172.16.0.0/24")},
|
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("172.16.0.0/24")},
|
||||||
},
|
},
|
||||||
|
@ -2692,7 +2426,7 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
&types.Node{
|
&types.Node{
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
IPv6: iap("fd7a:115c:a1e0::1"),
|
IPv6: iap("fd7a:115c:a1e0::1"),
|
||||||
User: users[1],
|
User: types.User{Name: "user1"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
want: []tailcfg.FilterRule{
|
want: []tailcfg.FilterRule{
|
||||||
|
@ -2720,7 +2454,6 @@ func TestReduceFilterRules(t *testing.T) {
|
||||||
for _, tt := range tests {
|
for _, tt := range tests {
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
got, _ := tt.pol.CompileFilterRules(
|
got, _ := tt.pol.CompileFilterRules(
|
||||||
users,
|
|
||||||
append(tt.peers, tt.node),
|
append(tt.peers, tt.node),
|
||||||
)
|
)
|
||||||
|
|
||||||
|
@ -3728,7 +3461,7 @@ func TestSSHRules(t *testing.T) {
|
||||||
|
|
||||||
for _, tt := range tests {
|
for _, tt := range tests {
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
got, err := tt.pol.CompileSSHPolicy(&tt.node, []types.User{}, tt.peers)
|
got, err := tt.pol.CompileSSHPolicy(&tt.node, tt.peers)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
if diff := cmp.Diff(tt.want, got); diff != "" {
|
if diff := cmp.Diff(tt.want, got); diff != "" {
|
||||||
|
@ -3811,17 +3544,14 @@ func TestValidExpandTagOwnersInSources(t *testing.T) {
|
||||||
RequestTags: []string{"tag:test"},
|
RequestTags: []string{"tag:test"},
|
||||||
}
|
}
|
||||||
|
|
||||||
user := types.User{
|
|
||||||
Model: gorm.Model{ID: 1},
|
|
||||||
Name: "user1",
|
|
||||||
}
|
|
||||||
|
|
||||||
node := &types.Node{
|
node := &types.Node{
|
||||||
ID: 0,
|
ID: 0,
|
||||||
Hostname: "testnodes",
|
Hostname: "testnodes",
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
UserID: 0,
|
UserID: 0,
|
||||||
User: user,
|
User: types.User{
|
||||||
|
Name: "user1",
|
||||||
|
},
|
||||||
RegisterMethod: util.RegisterMethodAuthKey,
|
RegisterMethod: util.RegisterMethodAuthKey,
|
||||||
Hostinfo: &hostInfo,
|
Hostinfo: &hostInfo,
|
||||||
}
|
}
|
||||||
|
@ -3838,7 +3568,7 @@ func TestValidExpandTagOwnersInSources(t *testing.T) {
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{}, []types.User{user})
|
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{})
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
want := []tailcfg.FilterRule{
|
want := []tailcfg.FilterRule{
|
||||||
|
@ -3872,7 +3602,6 @@ func TestInvalidTagValidUser(t *testing.T) {
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
UserID: 1,
|
UserID: 1,
|
||||||
User: types.User{
|
User: types.User{
|
||||||
Model: gorm.Model{ID: 1},
|
|
||||||
Name: "user1",
|
Name: "user1",
|
||||||
},
|
},
|
||||||
RegisterMethod: util.RegisterMethodAuthKey,
|
RegisterMethod: util.RegisterMethodAuthKey,
|
||||||
|
@ -3890,12 +3619,7 @@ func TestInvalidTagValidUser(t *testing.T) {
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
got, _, err := GenerateFilterAndSSHRulesForTests(
|
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{})
|
||||||
pol,
|
|
||||||
node,
|
|
||||||
types.Nodes{},
|
|
||||||
[]types.User{node.User},
|
|
||||||
)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
want := []tailcfg.FilterRule{
|
want := []tailcfg.FilterRule{
|
||||||
|
@ -3929,7 +3653,6 @@ func TestValidExpandTagOwnersInDestinations(t *testing.T) {
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
UserID: 1,
|
UserID: 1,
|
||||||
User: types.User{
|
User: types.User{
|
||||||
Model: gorm.Model{ID: 1},
|
|
||||||
Name: "user1",
|
Name: "user1",
|
||||||
},
|
},
|
||||||
RegisterMethod: util.RegisterMethodAuthKey,
|
RegisterMethod: util.RegisterMethodAuthKey,
|
||||||
|
@ -3955,12 +3678,7 @@ func TestValidExpandTagOwnersInDestinations(t *testing.T) {
|
||||||
// c.Assert(rules[0].DstPorts, check.HasLen, 1)
|
// c.Assert(rules[0].DstPorts, check.HasLen, 1)
|
||||||
// c.Assert(rules[0].DstPorts[0].IP, check.Equals, "100.64.0.1/32")
|
// c.Assert(rules[0].DstPorts[0].IP, check.Equals, "100.64.0.1/32")
|
||||||
|
|
||||||
got, _, err := GenerateFilterAndSSHRulesForTests(
|
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{})
|
||||||
pol,
|
|
||||||
node,
|
|
||||||
types.Nodes{},
|
|
||||||
[]types.User{node.User},
|
|
||||||
)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
want := []tailcfg.FilterRule{
|
want := []tailcfg.FilterRule{
|
||||||
|
@ -3989,17 +3707,15 @@ func TestValidTagInvalidUser(t *testing.T) {
|
||||||
Hostname: "webserver",
|
Hostname: "webserver",
|
||||||
RequestTags: []string{"tag:webapp"},
|
RequestTags: []string{"tag:webapp"},
|
||||||
}
|
}
|
||||||
user := types.User{
|
|
||||||
Model: gorm.Model{ID: 1},
|
|
||||||
Name: "user1",
|
|
||||||
}
|
|
||||||
|
|
||||||
node := &types.Node{
|
node := &types.Node{
|
||||||
ID: 1,
|
ID: 1,
|
||||||
Hostname: "webserver",
|
Hostname: "webserver",
|
||||||
IPv4: iap("100.64.0.1"),
|
IPv4: iap("100.64.0.1"),
|
||||||
UserID: 1,
|
UserID: 1,
|
||||||
User: user,
|
User: types.User{
|
||||||
|
Name: "user1",
|
||||||
|
},
|
||||||
RegisterMethod: util.RegisterMethodAuthKey,
|
RegisterMethod: util.RegisterMethodAuthKey,
|
||||||
Hostinfo: &hostInfo,
|
Hostinfo: &hostInfo,
|
||||||
}
|
}
|
||||||
|
@ -4014,7 +3730,9 @@ func TestValidTagInvalidUser(t *testing.T) {
|
||||||
Hostname: "user",
|
Hostname: "user",
|
||||||
IPv4: iap("100.64.0.2"),
|
IPv4: iap("100.64.0.2"),
|
||||||
UserID: 1,
|
UserID: 1,
|
||||||
User: user,
|
User: types.User{
|
||||||
|
Name: "user1",
|
||||||
|
},
|
||||||
RegisterMethod: util.RegisterMethodAuthKey,
|
RegisterMethod: util.RegisterMethodAuthKey,
|
||||||
Hostinfo: &hostInfo2,
|
Hostinfo: &hostInfo2,
|
||||||
}
|
}
|
||||||
|
@ -4030,12 +3748,7 @@ func TestValidTagInvalidUser(t *testing.T) {
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
got, _, err := GenerateFilterAndSSHRulesForTests(
|
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{nodes2})
|
||||||
pol,
|
|
||||||
node,
|
|
||||||
types.Nodes{nodes2},
|
|
||||||
[]types.User{user},
|
|
||||||
)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
want := []tailcfg.FilterRule{
|
want := []tailcfg.FilterRule{
|
||||||
|
|
|
@@ -105,7 +105,6 @@ type Nameservers struct {
 type SqliteConfig struct {
 	Path string
 	WriteAheadLog bool
-	WALAutoCheckPoint int
 }

 type PostgresConfig struct {
@@ -164,10 +163,8 @@ type OIDCConfig struct {
 	AllowedDomains []string
 	AllowedUsers []string
 	AllowedGroups []string
-	StripEmaildomain bool
 	Expiry time.Duration
 	UseExpiryFromToken bool
-	MapLegacyUsers bool
 }

 type DERPConfig struct {
@@ -274,14 +271,11 @@ func LoadConfig(path string, isFile bool) error {
 	viper.SetDefault("database.postgres.conn_max_idle_time_secs", 3600)

 	viper.SetDefault("database.sqlite.write_ahead_log", true)
-	viper.SetDefault("database.sqlite.wal_autocheckpoint", 1000) // SQLite default

 	viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"})
-	viper.SetDefault("oidc.strip_email_domain", true)
 	viper.SetDefault("oidc.only_start_if_oidc_is_available", true)
 	viper.SetDefault("oidc.expiry", "180d")
 	viper.SetDefault("oidc.use_expiry_from_token", false)
-	viper.SetDefault("oidc.map_legacy_users", true)

 	viper.SetDefault("logtail.enabled", false)
 	viper.SetDefault("randomize_client_port", false)
@@ -325,18 +319,14 @@ func validateServerConfig() error {
 	depr.warn("dns_config.use_username_in_magic_dns")
 	depr.warn("dns.use_username_in_magic_dns")

-	// TODO(kradalby): Reintroduce when strip_email_domain is removed
+	depr.fatal("oidc.strip_email_domain")
-	// after #2170 is cleaned up
-	// depr.fatal("oidc.strip_email_domain")
 	depr.fatal("dns.use_username_in_musername_in_magic_dns")
 	depr.fatal("dns_config.use_username_in_musername_in_magic_dns")

 	depr.Log()

 	for _, removed := range []string{
-		// TODO(kradalby): Reintroduce when strip_email_domain is removed
+		"oidc.strip_email_domain",
-		// after #2170 is cleaned up
-		// "oidc.strip_email_domain",
 		"dns_config.use_username_in_musername_in_magic_dns",
 	} {
 		if viper.IsSet(removed) {
@@ -554,7 +544,6 @@ func databaseConfig() DatabaseConfig {
 				viper.GetString("database.sqlite.path"),
 			),
 			WriteAheadLog: viper.GetBool("database.sqlite.write_ahead_log"),
-			WALAutoCheckPoint: viper.GetInt("database.sqlite.wal_autocheckpoint"),
 		},
 		Postgres: PostgresConfig{
 			Host: viper.GetString("database.postgres.host"),
@@ -908,10 +897,6 @@ func LoadServerConfig() (*Config, error) {
 				}
 			}(),
 			UseExpiryFromToken: viper.GetBool("oidc.use_expiry_from_token"),
-			// TODO(kradalby): Remove when strip_email_domain is removed
-			// after #2170 is cleaned up
-			StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
-			MapLegacyUsers: viper.GetBool("oidc.map_legacy_users"),
 		},

 		LogTail: logTailConfig,

@@ -26,7 +26,7 @@ type PreAuthKey struct {

 func (key *PreAuthKey) Proto() *v1.PreAuthKey {
 	protoKey := v1.PreAuthKey{
-		User: key.User.Username(),
+		User: key.User.Name,
 		Id: strconv.FormatUint(key.ID, util.Base10),
 		Key: key.Key,
 		Ephemeral: key.Ephemeral,

@@ -2,8 +2,6 @@ package types

 import (
 	"cmp"
-	"database/sql"
-	"net/mail"
 	"strconv"

 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
@@ -21,14 +19,10 @@ type UserID uint64
 // that contain our machines.
 type User struct {
 	gorm.Model
-	// The index `idx_name_provider_identifier` is to enforce uniqueness
-	// between Name and ProviderIdentifier. This ensures that
-	// you can have multiple users with the same name in OIDC,
-	// but not if you only run with CLI users.

 	// Username for the user, is used if email is empty
 	// Should not be used, please use Username().
-	Name string
+	Name string `gorm:"unique"`

 	// Typically the full name of the user
 	DisplayName string
@@ -40,7 +34,7 @@ type User struct {
 	// Unique identifier of the user from OIDC,
 	// comes from `sub` claim in the OIDC token
 	// and is used to lookup the user.
-	ProviderIdentifier sql.NullString
+	ProviderIdentifier string `gorm:"index"`

 	// Provider is the origin of the user account,
 	// same as RegistrationMethod, without authkey.
@@ -57,7 +51,7 @@ type User struct {
 // should be used throughout headscale, in information returned to the
 // user and the Policy engine.
 func (u *User) Username() string {
-	return cmp.Or(u.Email, u.Name, u.ProviderIdentifier.String, strconv.FormatUint(uint64(u.ID), 10))
+	return cmp.Or(u.Email, u.Name, u.ProviderIdentifier, strconv.FormatUint(uint64(u.ID), 10))
 }

 // DisplayNameOrUsername returns the DisplayName if it exists, otherwise
@@ -113,7 +107,7 @@ func (u *User) Proto() *v1.User {
 		CreatedAt: timestamppb.New(u.CreatedAt),
 		DisplayName: u.DisplayName,
 		Email: u.Email,
-		ProviderId: u.ProviderIdentifier.String,
+		ProviderId: u.ProviderIdentifier,
 		Provider: u.Provider,
 		ProfilePicUrl: u.ProfilePicURL,
 	}
@@ -122,7 +116,6 @@ func (u *User) Proto() *v1.User {
 type OIDCClaims struct {
 	// Sub is the user's unique identifier at the provider.
 	Sub string `json:"sub"`
-	Iss string `json:"iss"`

 	// Name is the user's full name.
 	Name string `json:"name,omitempty"`
@@ -133,27 +126,13 @@ type OIDCClaims struct {
 	Username string `json:"preferred_username,omitempty"`
 }

-func (c *OIDCClaims) Identifier() string {
-	return c.Iss + "/" + c.Sub
-}
-
 // FromClaim overrides a User from OIDC claims.
 // All fields will be updated, except for the ID.
 func (u *User) FromClaim(claims *OIDCClaims) {
-	err := util.CheckForFQDNRules(claims.Username)
+	u.ProviderIdentifier = claims.Sub
-	if err == nil {
-		u.Name = claims.Username
-	}
-
-	if claims.EmailVerified {
-		_, err = mail.ParseAddress(claims.Email)
-		if err == nil {
-			u.Email = claims.Email
-		}
-	}
-
-	u.ProviderIdentifier = sql.NullString{String: claims.Identifier(), Valid: true}
 	u.DisplayName = claims.Name
+	u.Email = claims.Email
+	u.Name = claims.Username
 	u.ProfilePicURL = claims.ProfilePictureURL
 	u.Provider = util.RegisterMethodOIDC
 }

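For readers skimming the hunks above: the removed code identified an OIDC user by the `iss` and `sub` claims combined, and only trusted the `email` claim when the provider marked it as verified, while the restored code writes the claims straight onto the user. The following is a minimal, self-contained sketch of the removed behaviour; the field names come from the hunks above, but the stripped-down structs, the package layout, and the `main` function are illustrative assumptions, not headscale's real types.

```go
package main

import (
	"fmt"
	"net/mail"
)

// OIDCClaims mirrors the claim fields used in the diff above (illustrative only).
type OIDCClaims struct {
	Sub               string
	Iss               string
	Email             string
	EmailVerified     bool
	Name              string
	Username          string
	ProfilePictureURL string
}

// Identifier is the iss+"/"+sub form the removed code used as the stable lookup key.
func (c OIDCClaims) Identifier() string {
	return c.Iss + "/" + c.Sub
}

// User is a cut-down stand-in for the real type.
type User struct {
	Name               string
	DisplayName        string
	Email              string
	ProviderIdentifier string
}

// fromClaim sketches the removed FromClaim logic: the email is only copied
// when the provider marked it verified and it parses as a mail address.
func (u *User) fromClaim(c OIDCClaims) {
	u.ProviderIdentifier = c.Identifier()
	u.DisplayName = c.Name
	u.Name = c.Username
	if c.EmailVerified {
		if _, err := mail.ParseAddress(c.Email); err == nil {
			u.Email = c.Email
		}
	}
}

func main() {
	var u User
	u.fromClaim(OIDCClaims{
		Sub:           "1234",
		Iss:           "http://oidc.org",
		Email:         "mikael@headscale.net",
		EmailVerified: true,
		Username:      "mikael",
	})
	fmt.Println(u.ProviderIdentifier) // http://oidc.org/1234
}
```
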
@@ -182,33 +182,3 @@ func GenerateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {

 	return fqdns
 }
-
-// TODO(kradalby): Reintroduce when strip_email_domain is removed
-// after #2170 is cleaned up
-// DEPRECATED: DO NOT USE
-// NormalizeToFQDNRules will replace forbidden chars in user
-// it can also return an error if the user doesn't respect RFC 952 and 1123.
-func NormalizeToFQDNRules(name string, stripEmailDomain bool) (string, error) {
-
-	name = strings.ToLower(name)
-	name = strings.ReplaceAll(name, "'", "")
-	atIdx := strings.Index(name, "@")
-	if stripEmailDomain && atIdx > 0 {
-		name = name[:atIdx]
-	} else {
-		name = strings.ReplaceAll(name, "@", ".")
-	}
-	name = invalidCharsInUserRegex.ReplaceAllString(name, "-")
-
-	for _, elt := range strings.Split(name, ".") {
-		if len(elt) > LabelHostnameLength {
-			return "", fmt.Errorf(
-				"label %v is more than 63 chars: %w",
-				elt,
-				ErrInvalidUserName,
-			)
-		}
-	}
-
-	return name, nil
-}

@ -3,7 +3,6 @@ package integration
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"crypto/tls"
|
"crypto/tls"
|
||||||
"encoding/json"
|
|
||||||
"errors"
|
"errors"
|
||||||
"fmt"
|
"fmt"
|
||||||
"io"
|
"io"
|
||||||
|
@ -11,19 +10,14 @@ import (
|
||||||
"net"
|
"net"
|
||||||
"net/http"
|
"net/http"
|
||||||
"net/netip"
|
"net/netip"
|
||||||
"sort"
|
|
||||||
"strconv"
|
"strconv"
|
||||||
"testing"
|
"testing"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/google/go-cmp/cmp"
|
|
||||||
"github.com/google/go-cmp/cmp/cmpopts"
|
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
|
||||||
"github.com/juanfont/headscale/hscontrol/types"
|
"github.com/juanfont/headscale/hscontrol/types"
|
||||||
"github.com/juanfont/headscale/hscontrol/util"
|
"github.com/juanfont/headscale/hscontrol/util"
|
||||||
"github.com/juanfont/headscale/integration/dockertestutil"
|
"github.com/juanfont/headscale/integration/dockertestutil"
|
||||||
"github.com/juanfont/headscale/integration/hsic"
|
"github.com/juanfont/headscale/integration/hsic"
|
||||||
"github.com/oauth2-proxy/mockoidc"
|
|
||||||
"github.com/ory/dockertest/v3"
|
"github.com/ory/dockertest/v3"
|
||||||
"github.com/ory/dockertest/v3/docker"
|
"github.com/ory/dockertest/v3/docker"
|
||||||
"github.com/samber/lo"
|
"github.com/samber/lo"
|
||||||
|
@ -54,34 +48,20 @@ func TestOIDCAuthenticationPingAll(t *testing.T) {
|
||||||
scenario := AuthOIDCScenario{
|
scenario := AuthOIDCScenario{
|
||||||
Scenario: baseScenario,
|
Scenario: baseScenario,
|
||||||
}
|
}
|
||||||
// defer scenario.ShutdownAssertNoPanics(t)
|
defer scenario.ShutdownAssertNoPanics(t)
|
||||||
|
|
||||||
// Logins to MockOIDC is served by a queue with a strict order,
|
|
||||||
// if we use more than one node per user, the order of the logins
|
|
||||||
// will not be deterministic and the test will fail.
|
|
||||||
spec := map[string]int{
|
spec := map[string]int{
|
||||||
"user1": 1,
|
"user1": len(MustTestVersions),
|
||||||
"user2": 1,
|
|
||||||
}
|
}
|
||||||
|
|
||||||
mockusers := []mockoidc.MockUser{
|
oidcConfig, err := scenario.runMockOIDC(defaultAccessTTL)
|
||||||
oidcMockUser("user1", true),
|
|
||||||
oidcMockUser("user2", false),
|
|
||||||
}
|
|
||||||
|
|
||||||
oidcConfig, err := scenario.runMockOIDC(defaultAccessTTL, mockusers)
|
|
||||||
assertNoErrf(t, "failed to run mock OIDC server: %s", err)
|
assertNoErrf(t, "failed to run mock OIDC server: %s", err)
|
||||||
defer scenario.mockOIDC.Close()
|
|
||||||
|
|
||||||
oidcMap := map[string]string{
|
oidcMap := map[string]string{
|
||||||
"HEADSCALE_OIDC_ISSUER": oidcConfig.Issuer,
|
"HEADSCALE_OIDC_ISSUER": oidcConfig.Issuer,
|
||||||
"HEADSCALE_OIDC_CLIENT_ID": oidcConfig.ClientID,
|
"HEADSCALE_OIDC_CLIENT_ID": oidcConfig.ClientID,
|
||||||
"CREDENTIALS_DIRECTORY_TEST": "/tmp",
|
"CREDENTIALS_DIRECTORY_TEST": "/tmp",
|
||||||
"HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret",
|
"HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret",
|
||||||
// TODO(kradalby): Remove when strip_email_domain is removed
|
|
||||||
// after #2170 is cleaned up
|
|
||||||
"HEADSCALE_OIDC_MAP_LEGACY_USERS": "0",
|
|
||||||
"HEADSCALE_OIDC_STRIP_EMAIL_DOMAIN": "0",
|
|
||||||
}
|
}
|
||||||
|
|
||||||
err = scenario.CreateHeadscaleEnv(
|
err = scenario.CreateHeadscaleEnv(
|
||||||
|
@ -111,55 +91,6 @@ func TestOIDCAuthenticationPingAll(t *testing.T) {
|
||||||
|
|
||||||
success := pingAllHelper(t, allClients, allAddrs)
|
success := pingAllHelper(t, allClients, allAddrs)
|
||||||
t.Logf("%d successful pings out of %d", success, len(allClients)*len(allIps))
|
t.Logf("%d successful pings out of %d", success, len(allClients)*len(allIps))
|
||||||
|
|
||||||
headscale, err := scenario.Headscale()
|
|
||||||
assertNoErr(t, err)
|
|
||||||
|
|
||||||
var listUsers []v1.User
|
|
||||||
err = executeAndUnmarshal(headscale,
|
|
||||||
[]string{
|
|
||||||
"headscale",
|
|
||||||
"users",
|
|
||||||
"list",
|
|
||||||
"--output",
|
|
||||||
"json",
|
|
||||||
},
|
|
||||||
&listUsers,
|
|
||||||
)
|
|
||||||
assertNoErr(t, err)
|
|
||||||
|
|
||||||
want := []v1.User{
|
|
||||||
{
|
|
||||||
Id: "1",
|
|
||||||
Name: "user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "2",
|
|
||||||
Name: "user1",
|
|
||||||
Email: "user1@headscale.net",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: oidcConfig.Issuer + "/user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "3",
|
|
||||||
Name: "user2",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "4",
|
|
||||||
Name: "user2",
|
|
||||||
Email: "", // Unverified
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: oidcConfig.Issuer + "/user2",
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
sort.Slice(listUsers, func(i, j int) bool {
|
|
||||||
return listUsers[i].Id < listUsers[j].Id
|
|
||||||
})
|
|
||||||
|
|
||||||
if diff := cmp.Diff(want, listUsers, cmpopts.IgnoreUnexported(v1.User{}), cmpopts.IgnoreFields(v1.User{}, "CreatedAt")); diff != "" {
|
|
||||||
t.Fatalf("unexpected users: %s", diff)
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// This test is really flaky.
|
// This test is really flaky.
|
||||||
|
@ -180,16 +111,11 @@ func TestOIDCExpireNodesBasedOnTokenExpiry(t *testing.T) {
|
||||||
defer scenario.ShutdownAssertNoPanics(t)
|
defer scenario.ShutdownAssertNoPanics(t)
|
||||||
|
|
||||||
spec := map[string]int{
|
spec := map[string]int{
|
||||||
"user1": 1,
|
"user1": 3,
|
||||||
"user2": 1,
|
|
||||||
}
|
}
|
||||||
|
|
||||||
oidcConfig, err := scenario.runMockOIDC(shortAccessTTL, []mockoidc.MockUser{
|
oidcConfig, err := scenario.runMockOIDC(shortAccessTTL)
|
||||||
oidcMockUser("user1", true),
|
|
||||||
oidcMockUser("user2", false),
|
|
||||||
})
|
|
||||||
assertNoErrf(t, "failed to run mock OIDC server: %s", err)
|
assertNoErrf(t, "failed to run mock OIDC server: %s", err)
|
||||||
defer scenario.mockOIDC.Close()
|
|
||||||
|
|
||||||
oidcMap := map[string]string{
|
oidcMap := map[string]string{
|
||||||
"HEADSCALE_OIDC_ISSUER": oidcConfig.Issuer,
|
"HEADSCALE_OIDC_ISSUER": oidcConfig.Issuer,
|
||||||
|
@ -233,297 +159,6 @@ func TestOIDCExpireNodesBasedOnTokenExpiry(t *testing.T) {
|
||||||
assertTailscaleNodesLogout(t, allClients)
|
assertTailscaleNodesLogout(t, allClients)
|
||||||
}
|
}
|
||||||
|
|
||||||
// TODO(kradalby):
|
|
||||||
// - Test that creates a new user when one exists when migration is turned off
|
|
||||||
// - Test that takes over a user when one exists when migration is turned on
|
|
||||||
// - But email is not verified
|
|
||||||
// - stripped email domain on/off
|
|
||||||
func TestOIDC024UserCreation(t *testing.T) {
|
|
||||||
IntegrationSkip(t)
|
|
||||||
|
|
||||||
tests := []struct {
|
|
||||||
name string
|
|
||||||
config map[string]string
|
|
||||||
emailVerified bool
|
|
||||||
cliUsers []string
|
|
||||||
oidcUsers []string
|
|
||||||
want func(iss string) []v1.User
|
|
||||||
}{
|
|
||||||
{
|
|
||||||
name: "no-migration-verified-email",
|
|
||||||
config: map[string]string{
|
|
||||||
"HEADSCALE_OIDC_MAP_LEGACY_USERS": "0",
|
|
||||||
},
|
|
||||||
emailVerified: true,
|
|
||||||
cliUsers: []string{"user1", "user2"},
|
|
||||||
oidcUsers: []string{"user1", "user2"},
|
|
||||||
want: func(iss string) []v1.User {
|
|
||||||
return []v1.User{
|
|
||||||
{
|
|
||||||
Id: "1",
|
|
||||||
Name: "user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "2",
|
|
||||||
Name: "user1",
|
|
||||||
Email: "user1@headscale.net",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "3",
|
|
||||||
Name: "user2",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "4",
|
|
||||||
Name: "user2",
|
|
||||||
Email: "user2@headscale.net",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user2",
|
|
||||||
},
|
|
||||||
}
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "no-migration-not-verified-email",
|
|
||||||
config: map[string]string{
|
|
||||||
"HEADSCALE_OIDC_MAP_LEGACY_USERS": "0",
|
|
||||||
},
|
|
||||||
emailVerified: false,
|
|
||||||
cliUsers: []string{"user1", "user2"},
|
|
||||||
oidcUsers: []string{"user1", "user2"},
|
|
||||||
want: func(iss string) []v1.User {
|
|
||||||
return []v1.User{
|
|
||||||
{
|
|
||||||
Id: "1",
|
|
||||||
Name: "user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "2",
|
|
||||||
Name: "user1",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "3",
|
|
||||||
Name: "user2",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "4",
|
|
||||||
Name: "user2",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user2",
|
|
||||||
},
|
|
||||||
}
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "migration-strip-domains-verified-email",
|
|
||||||
config: map[string]string{
|
|
||||||
"HEADSCALE_OIDC_MAP_LEGACY_USERS": "1",
|
|
||||||
"HEADSCALE_OIDC_STRIP_EMAIL_DOMAIN": "1",
|
|
||||||
},
|
|
||||||
emailVerified: true,
|
|
||||||
cliUsers: []string{"user1", "user2"},
|
|
||||||
oidcUsers: []string{"user1", "user2"},
|
|
||||||
want: func(iss string) []v1.User {
|
|
||||||
return []v1.User{
|
|
||||||
{
|
|
||||||
Id: "1",
|
|
||||||
Name: "user1",
|
|
||||||
Email: "user1@headscale.net",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "2",
|
|
||||||
Name: "user2",
|
|
||||||
Email: "user2@headscale.net",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user2",
|
|
||||||
},
|
|
||||||
}
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "migration-strip-domains-not-verified-email",
|
|
||||||
config: map[string]string{
|
|
||||||
"HEADSCALE_OIDC_MAP_LEGACY_USERS": "1",
|
|
||||||
"HEADSCALE_OIDC_STRIP_EMAIL_DOMAIN": "1",
|
|
||||||
},
|
|
||||||
emailVerified: false,
|
|
||||||
cliUsers: []string{"user1", "user2"},
|
|
||||||
oidcUsers: []string{"user1", "user2"},
|
|
||||||
want: func(iss string) []v1.User {
|
|
||||||
return []v1.User{
|
|
||||||
{
|
|
||||||
Id: "1",
|
|
||||||
Name: "user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "2",
|
|
||||||
Name: "user1",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "3",
|
|
||||||
Name: "user2",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "4",
|
|
||||||
Name: "user2",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user2",
|
|
||||||
},
|
|
||||||
}
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "migration-no-strip-domains-verified-email",
|
|
||||||
config: map[string]string{
|
|
||||||
"HEADSCALE_OIDC_MAP_LEGACY_USERS": "1",
|
|
||||||
"HEADSCALE_OIDC_STRIP_EMAIL_DOMAIN": "0",
|
|
||||||
},
|
|
||||||
emailVerified: true,
|
|
||||||
cliUsers: []string{"user1.headscale.net", "user2.headscale.net"},
|
|
||||||
oidcUsers: []string{"user1", "user2"},
|
|
||||||
want: func(iss string) []v1.User {
|
|
||||||
return []v1.User{
|
|
||||||
// Hmm I think we will have to overwrite the initial name here
|
|
||||||
// createuser with "user1.headscale.net", but oidc with "user1"
|
|
||||||
{
|
|
||||||
Id: "1",
|
|
||||||
Name: "user1",
|
|
||||||
Email: "user1@headscale.net",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "2",
|
|
||||||
Name: "user2",
|
|
||||||
Email: "user2@headscale.net",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user2",
|
|
||||||
},
|
|
||||||
}
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "migration-no-strip-domains-not-verified-email",
|
|
||||||
config: map[string]string{
|
|
||||||
"HEADSCALE_OIDC_MAP_LEGACY_USERS": "1",
|
|
||||||
"HEADSCALE_OIDC_STRIP_EMAIL_DOMAIN": "0",
|
|
||||||
},
|
|
||||||
emailVerified: false,
|
|
||||||
cliUsers: []string{"user1.headscale.net", "user2.headscale.net"},
|
|
||||||
oidcUsers: []string{"user1", "user2"},
|
|
||||||
want: func(iss string) []v1.User {
|
|
||||||
return []v1.User{
|
|
||||||
{
|
|
||||||
Id: "1",
|
|
||||||
Name: "user1.headscale.net",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "2",
|
|
||||||
Name: "user1",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "3",
|
|
||||||
Name: "user2.headscale.net",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Id: "4",
|
|
||||||
Name: "user2",
|
|
||||||
Provider: "oidc",
|
|
||||||
ProviderId: iss + "/user2",
|
|
||||||
},
|
|
||||||
}
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, tt := range tests {
|
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
|
||||||
baseScenario, err := NewScenario(dockertestMaxWait())
|
|
||||||
assertNoErr(t, err)
|
|
||||||
|
|
||||||
scenario := AuthOIDCScenario{
|
|
||||||
Scenario: baseScenario,
|
|
||||||
}
|
|
||||||
defer scenario.ShutdownAssertNoPanics(t)
|
|
||||||
|
|
||||||
spec := map[string]int{}
|
|
||||||
for _, user := range tt.cliUsers {
|
|
||||||
spec[user] = 1
|
|
||||||
}
|
|
||||||
|
|
||||||
var mockusers []mockoidc.MockUser
|
|
||||||
for _, user := range tt.oidcUsers {
|
|
||||||
mockusers = append(mockusers, oidcMockUser(user, tt.emailVerified))
|
|
||||||
}
|
|
||||||
|
|
||||||
oidcConfig, err := scenario.runMockOIDC(defaultAccessTTL, mockusers)
|
|
||||||
assertNoErrf(t, "failed to run mock OIDC server: %s", err)
|
|
||||||
defer scenario.mockOIDC.Close()
|
|
||||||
|
|
||||||
oidcMap := map[string]string{
|
|
||||||
"HEADSCALE_OIDC_ISSUER": oidcConfig.Issuer,
|
|
||||||
"HEADSCALE_OIDC_CLIENT_ID": oidcConfig.ClientID,
|
|
||||||
"CREDENTIALS_DIRECTORY_TEST": "/tmp",
|
|
||||||
"HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret",
|
|
||||||
}
|
|
||||||
|
|
||||||
for k, v := range tt.config {
|
|
||||||
oidcMap[k] = v
|
|
||||||
}
|
|
||||||
|
|
||||||
err = scenario.CreateHeadscaleEnv(
|
|
||||||
spec,
|
|
||||||
hsic.WithTestName("oidcmigration"),
|
|
||||||
hsic.WithConfigEnv(oidcMap),
|
|
||||||
hsic.WithTLS(),
|
|
||||||
hsic.WithHostnameAsServerURL(),
|
|
||||||
hsic.WithFileInContainer("/tmp/hs_client_oidc_secret", []byte(oidcConfig.ClientSecret)),
|
|
||||||
)
|
|
||||||
assertNoErrHeadscaleEnv(t, err)
|
|
||||||
|
|
||||||
// Ensure that the nodes have logged in, this is what
|
|
||||||
// triggers user creation via OIDC.
|
|
||||||
err = scenario.WaitForTailscaleSync()
|
|
||||||
assertNoErrSync(t, err)
|
|
||||||
|
|
||||||
headscale, err := scenario.Headscale()
|
|
||||||
assertNoErr(t, err)
|
|
||||||
|
|
||||||
want := tt.want(oidcConfig.Issuer)
|
|
||||||
|
|
||||||
var listUsers []v1.User
|
|
||||||
err = executeAndUnmarshal(headscale,
|
|
||||||
[]string{
|
|
||||||
"headscale",
|
|
||||||
"users",
|
|
||||||
"list",
|
|
||||||
"--output",
|
|
||||||
"json",
|
|
||||||
},
|
|
||||||
&listUsers,
|
|
||||||
)
|
|
||||||
assertNoErr(t, err)
|
|
||||||
|
|
||||||
sort.Slice(listUsers, func(i, j int) bool {
|
|
||||||
return listUsers[i].Id < listUsers[j].Id
|
|
||||||
})
|
|
||||||
|
|
||||||
if diff := cmp.Diff(want, listUsers, cmpopts.IgnoreUnexported(v1.User{}), cmpopts.IgnoreFields(v1.User{}, "CreatedAt")); diff != "" {
|
|
||||||
t.Errorf("unexpected users: %s", diff)
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (s *AuthOIDCScenario) CreateHeadscaleEnv(
|
func (s *AuthOIDCScenario) CreateHeadscaleEnv(
|
||||||
users map[string]int,
|
users map[string]int,
|
||||||
opts ...hsic.Option,
|
opts ...hsic.Option,
|
||||||
|
@ -539,13 +174,6 @@ func (s *AuthOIDCScenario) CreateHeadscaleEnv(
|
||||||
}
|
}
|
||||||
|
|
||||||
for userName, clientCount := range users {
|
for userName, clientCount := range users {
|
||||||
if clientCount != 1 {
|
|
||||||
// OIDC scenario only supports one client per user.
|
|
||||||
// This is because the MockOIDC server can only serve login
|
|
||||||
// requests based on a queue it has been given on startup.
|
|
||||||
// We currently only populates it with one login request per user.
|
|
||||||
return fmt.Errorf("client count must be 1 for OIDC scenario.")
|
|
||||||
}
|
|
||||||
log.Printf("creating user %s with %d clients", userName, clientCount)
|
log.Printf("creating user %s with %d clients", userName, clientCount)
|
||||||
err = s.CreateUser(userName)
|
err = s.CreateUser(userName)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
@ -566,7 +194,7 @@ func (s *AuthOIDCScenario) CreateHeadscaleEnv(
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s *AuthOIDCScenario) runMockOIDC(accessTTL time.Duration, users []mockoidc.MockUser) (*types.OIDCConfig, error) {
|
func (s *AuthOIDCScenario) runMockOIDC(accessTTL time.Duration) (*types.OIDCConfig, error) {
|
||||||
port, err := dockertestutil.RandomFreeHostPort()
|
port, err := dockertestutil.RandomFreeHostPort()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatalf("could not find an open port: %s", err)
|
log.Fatalf("could not find an open port: %s", err)
|
||||||
|
@ -577,11 +205,6 @@ func (s *AuthOIDCScenario) runMockOIDC(accessTTL time.Duration, users []mockoidc
|
||||||
|
|
||||||
hostname := fmt.Sprintf("hs-oidcmock-%s", hash)
|
hostname := fmt.Sprintf("hs-oidcmock-%s", hash)
|
||||||
|
|
||||||
usersJSON, err := json.Marshal(users)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
mockOidcOptions := &dockertest.RunOptions{
|
mockOidcOptions := &dockertest.RunOptions{
|
||||||
Name: hostname,
|
Name: hostname,
|
||||||
Cmd: []string{"headscale", "mockoidc"},
|
Cmd: []string{"headscale", "mockoidc"},
|
||||||
|
@ -596,12 +219,11 @@ func (s *AuthOIDCScenario) runMockOIDC(accessTTL time.Duration, users []mockoidc
|
||||||
"MOCKOIDC_CLIENT_ID=superclient",
|
"MOCKOIDC_CLIENT_ID=superclient",
|
||||||
"MOCKOIDC_CLIENT_SECRET=supersecret",
|
"MOCKOIDC_CLIENT_SECRET=supersecret",
|
||||||
fmt.Sprintf("MOCKOIDC_ACCESS_TTL=%s", accessTTL.String()),
|
fmt.Sprintf("MOCKOIDC_ACCESS_TTL=%s", accessTTL.String()),
|
||||||
fmt.Sprintf("MOCKOIDC_USERS=%s", string(usersJSON)),
|
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
headscaleBuildOptions := &dockertest.BuildOptions{
|
headscaleBuildOptions := &dockertest.BuildOptions{
|
||||||
Dockerfile: hsic.IntegrationTestDockerFileName,
|
Dockerfile: "Dockerfile.debug",
|
||||||
ContextDir: dockerContextPath,
|
ContextDir: dockerContextPath,
|
||||||
}
|
}
|
||||||
|
|
||||||
|
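The hunks that follow wrap the OIDC login request in `s.pool.Retry`, so that a request made while the mock OIDC container is still starting is retried rather than failing the run outright. A minimal sketch of that pattern, assuming `github.com/ory/dockertest/v3` is the pool in use (the function name, URL handling, and HTTP client are illustrative, not the test's actual helpers):

```go
package main

import (
	"errors"
	"log"
	"net/http"

	"github.com/ory/dockertest/v3"
)

var errStatusCodeNotOK = errors.New("status code not ok")

// loginWithRetry wraps a single OIDC login request in pool.Retry so that
// transient failures (container still starting, TLS not yet ready) are
// retried until dockertest's backoff gives up.
func loginWithRetry(pool *dockertest.Pool, loginURL string) error {
	return pool.Retry(func() error {
		resp, err := http.Get(loginURL) // illustrative; the test uses its own client
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		if resp.StatusCode != http.StatusOK {
			log.Printf("oidc login returned %s", resp.Status)
			return errStatusCodeNotOK
		}

		return nil
	})
}

func main() {
	// Building a real pool (dockertest.NewPool("")) needs a running Docker
	// daemon, so this sketch only wires the helper up.
	_ = loginWithRetry
}
```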
@@ -688,6 +310,7 @@ func (s *AuthOIDCScenario) runTailscaleUp(
 
 			log.Printf("%s login url: %s\n", c.Hostname(), loginURL.String())
 
+			if err := s.pool.Retry(func() error {
 				log.Printf("%s logging in with url", c.Hostname())
 				httpClient := &http.Client{Transport: insecureTransport}
 				ctx := context.Background()
@@ -706,8 +329,6 @@ func (s *AuthOIDCScenario) runTailscaleUp(
 
 				if resp.StatusCode != http.StatusOK {
 					log.Printf("%s response code of oidc login request was %s", c.Hostname(), resp.Status)
-					body, _ := io.ReadAll(resp.Body)
-					log.Printf("body: %s", body)
-
 					return errStatusCodeNotOK
 				}
@@ -721,7 +342,13 @@ func (s *AuthOIDCScenario) runTailscaleUp(
 					return err
 				}
 
+				return nil
+			}); err != nil {
+				return err
+			}
+
 			log.Printf("Finished request for %s to join tailnet", c.Hostname())
 
 			return nil
 		})
@@ -768,12 +395,3 @@ func assertTailscaleNodesLogout(t *testing.T, clients []TailscaleClient) {
 		assert.Equal(t, "NeedsLogin", status.BackendState)
 	}
 }
-
-func oidcMockUser(username string, emailVerified bool) mockoidc.MockUser {
-	return mockoidc.MockUser{
-		Subject:           username,
-		PreferredUsername: username,
-		Email:             fmt.Sprintf("%s@headscale.net", username),
-		EmailVerified:     emailVerified,
-	}
-}
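In the hunks above, the left-hand side builds a queue of mock users with the removed `oidcMockUser` helper, marshals it to JSON, and hands it to the mock OIDC container via `MOCKOIDC_USERS`. A small self-contained sketch of that flow, assuming the upstream `github.com/oauth2-proxy/mockoidc` package (the user names and the env-var printing are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/oauth2-proxy/mockoidc"
)

// mockUsers builds one login entry per user name, mirroring the removed
// oidcMockUser helper visible in the diff above.
func mockUsers(names []string, emailVerified bool) []mockoidc.MockUser {
	users := make([]mockoidc.MockUser, 0, len(names))
	for _, name := range names {
		users = append(users, mockoidc.MockUser{
			Subject:           name,
			PreferredUsername: name,
			Email:             fmt.Sprintf("%s@headscale.net", name),
			EmailVerified:     emailVerified,
		})
	}
	return users
}

func main() {
	// The removed code marshalled the slice and passed it to the mock OIDC
	// container as the MOCKOIDC_USERS environment variable.
	usersJSON, err := json.Marshal(mockUsers([]string{"user1", "user2"}, true))
	if err != nil {
		panic(err)
	}
	fmt.Printf("MOCKOIDC_USERS=%s\n", usersJSON)
}
```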
@@ -213,9 +213,7 @@ func TestPreAuthKeyCommand(t *testing.T) {
 			continue
 		}
 
-		tags := listedPreAuthKeys[index].GetAclTags()
-		sort.Strings(tags)
-		assert.Equal(t, []string{"tag:test1", "tag:test2"}, tags)
+		assert.Equal(t, []string{"tag:test1", "tag:test2"}, listedPreAuthKeys[index].GetAclTags())
 	}
 
 	// Test key expiry
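The left-hand side of the TestPreAuthKeyCommand hunk sorts the returned ACL tags before comparing them, so the assertion does not depend on the order the CLI returns them in; the right-hand side compares the raw slice directly. A sketch of the order-insensitive pattern using testify (the tag values mirror the diff, the test name and package are illustrative):

```go
package integration

import (
	"sort"
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestTagsOrderInsensitive shows the pattern from the left-hand side:
// sort a copy of the returned tags before comparing.
func TestTagsOrderInsensitive(t *testing.T) {
	// Stand-in for listedPreAuthKeys[index].GetAclTags().
	got := []string{"tag:test2", "tag:test1"}

	tags := append([]string(nil), got...) // copy, so the response slice is not mutated
	sort.Strings(tags)

	assert.Equal(t, []string{"tag:test1", "tag:test2"}, tags)

	// assert.ElementsMatch(t, []string{"tag:test1", "tag:test2"}, got)
	// is an equivalent one-liner.
}
```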
@@ -74,7 +74,7 @@ func ExecuteCommand(
 	select {
 	case res := <-resultChan:
 		if res.err != nil {
-			return stdout.String(), stderr.String(), fmt.Errorf("command failed, stderr: %s: %w", stderr.String(), res.err)
+			return stdout.String(), stderr.String(), res.err
 		}
 
 		if res.exitCode != 0 {
@@ -83,12 +83,12 @@ func ExecuteCommand(
 			// log.Println("stdout: ", stdout.String())
 			// log.Println("stderr: ", stderr.String())
 
-			return stdout.String(), stderr.String(), fmt.Errorf("command failed, stderr: %s: %w", stderr.String(), ErrDockertestCommandFailed)
+			return stdout.String(), stderr.String(), ErrDockertestCommandFailed
 		}
 
 		return stdout.String(), stderr.String(), nil
 	case <-time.After(execConfig.timeout):
 
-		return stdout.String(), stderr.String(), fmt.Errorf("command failed, stderr: %s: %w", stderr.String(), ErrDockertestCommandTimeout)
+		return stdout.String(), stderr.String(), ErrDockertestCommandTimeout
 	}
 }
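In the ExecuteCommand hunks, the left-hand side wraps the sentinel errors with `fmt.Errorf` and `%w`, which keeps them matchable via `errors.Is` while surfacing the captured stderr in the message; the right-hand side returns the bare error. A minimal sketch of the wrapping pattern (the sentinel value and function name are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel mirroring the one used by the dockertest helpers.
var ErrDockertestCommandFailed = errors.New("dockertest command failed")

// wrapExecErr keeps the sentinel matchable with errors.Is while putting the
// captured stderr into the message, as on the left-hand side of the hunk.
func wrapExecErr(stderr string) error {
	return fmt.Errorf("command failed, stderr: %s: %w", stderr, ErrDockertestCommandFailed)
}

func main() {
	err := wrapExecErr("permission denied")

	fmt.Println(errors.Is(err, ErrDockertestCommandFailed)) // true
	fmt.Println(err)                                        // command failed, stderr: permission denied: dockertest command failed
}
```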
@@ -37,7 +37,6 @@ const (
 	tlsCertPath          = "/etc/headscale/tls.cert"
 	tlsKeyPath           = "/etc/headscale/tls.key"
 	headscaleDefaultPort = 8080
-	IntegrationTestDockerFileName = "Dockerfile.integration"
 )
 
 var errHeadscaleStatusCodeNotOk = errors.New("headscale status code not ok")
@@ -304,7 +303,7 @@ func New(
 	}
 
 	headscaleBuildOptions := &dockertest.BuildOptions{
-		Dockerfile: IntegrationTestDockerFileName,
+		Dockerfile: "Dockerfile.debug",
 		ContextDir: dockerContextPath,
 	}
 
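The left-hand side of these hunks names the integration Dockerfile once in an exported constant and reuses it wherever a headscale image is built, while the right-hand side inlines "Dockerfile.debug". A sketch of the shared-constant approach, assuming `github.com/ory/dockertest/v3` for `BuildOptions` (the context path and function name are illustrative):

```go
package main

import (
	"fmt"

	"github.com/ory/dockertest/v3"
)

// IntegrationTestDockerFileName matches the constant removed in the hunk
// above; keeping the file name in one place means every helper builds the
// headscale image from the same Dockerfile.
const IntegrationTestDockerFileName = "Dockerfile.integration"

func buildOptions(dockerContextPath string) *dockertest.BuildOptions {
	return &dockertest.BuildOptions{
		Dockerfile: IntegrationTestDockerFileName,
		ContextDir: dockerContextPath,
	}
}

func main() {
	fmt.Println(buildOptions("..").Dockerfile)
}
```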