Compare commits


16 commits

Author SHA1 Message Date
Kristoffer Dalby
4f57410a5b
fix constraints
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:50:55 +01:00
Kristoffer Dalby
60b9595929
fix nil in test
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:50:55 +01:00
Kristoffer Dalby
0dd5956756
fix constraints
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:50:55 +01:00
Kristoffer Dalby
7c92acb50c
nits
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:50:55 +01:00
Kristoffer Dalby
33c8bbcef8
use userID instead of username everywhere
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:50:53 +01:00
Kristoffer Dalby
d6fedd117e
make preauthkey tags test stable
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:50:00 +01:00
Kristoffer Dalby
69b9abaa6c
fix oidc test, add tests for migration
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:49:08 +01:00
Kristoffer Dalby
8059d475a4
restore strip_email_domain for migration
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:49:08 +01:00
Kristoffer Dalby
939c233b8d
add iss to identifier, only set email if verified
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:49:08 +01:00
Kristoffer Dalby
8302291e38
add @ to end of username if not present
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:49:08 +01:00
Kristoffer Dalby
9f56a723ef
remove log print
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:49:08 +01:00
Kristoffer Dalby
d7363e1c14
update changelog
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:49:08 +01:00
Kristoffer Dalby
1f73616f90
Harden OIDC migration and make optional
This commit hardens the migration part of the OIDC from
the old username based approach to the new sub based approach
and makes it possible for the operator to opt out entirely.

Fixes #1990

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 17:49:08 +01:00
Kristoffer Dalby
a6b19e85db
more linter fixups (#2212)
* linter fixes

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>

* conf

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>

* update nix hash

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>

---------

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-22 15:54:58 +00:00
ArcticLampyrid
edf9e25001
feat: support client verify for derp (add integration tests) (#2046)
* feat: support client verify for derp

* docs: fix doc for integration test

* tests: add integration test for DERP verify endpoint

* tests: use `tailcfg.DERPMap` instead of `[]byte`

* refactor: introduce func `ContainsNodeKey`

* tests(dsic): use string builder for cmd args

* ci: fix tests order

* tests: fix derper failure

* chore: cleanup

* tests(verify-client): prefer to use `CreateHeadscaleEnv`

* refactor(verify-client): simplify error handling

* tests: fix `TestDERPVerifyEndpoint`

* refactor: make `doVerify` a separate func

---------

Co-authored-by: 117503445 <t117503445@gmail.com>
2024-11-22 13:23:05 +01:00
Motiejus Jakštys
c6336adb01
config: loosen up BaseDomain and ServerURL checks (#2248)
* config: loosen up BaseDomain and ServerURL checks

Requirements [here][1]:

> OK:
> server_url: headscale.com, base: clients.headscale.com
> server_url: headscale.com, base: headscale.net
>
> Not OK:
> server_url: server.headscale.com, base: headscale.com
>
> Essentially we have to prevent the possibility where the headscale
> server has a URL which can also be assigned to a node.
>
> So for the Not OK scenario:
>
> if the server is: server.headscale.com, and a node joins with the name
> server, it will be assigned server.headscale.com and that will break
> the connection for nodes which will now try to connect to that node
> instead of the headscale server.

Fixes #2210

[1]: https://github.com/juanfont/headscale/issues/2210#issuecomment-2488165187

* server_url and base_domain: re-word error message, fix a one-off bug and add a test case for the bug.

* lint

* lint again
2024-11-22 13:21:44 +01:00
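To make the new rule concrete, here is a hedged config sketch (hypothetical domains, not taken from this change). The check only rejects a `base_domain` that is a strict suffix of the server host:

```
# OK: the server host is not contained in the MagicDNS base domain.
server_url: "https://headscale.example.com"
dns:
  base_domain: "clients.headscale.example.com"

# NOT OK: a node named "server" would be assigned
# server.headscale.example.com and shadow the control server.
# server_url: "https://server.headscale.example.com"
# dns:
#   base_domain: "headscale.example.com"
```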
34 changed files with 1401 additions and 536 deletions


@ -39,6 +39,7 @@ jobs:
- TestNodeMoveCommand
- TestPolicyCommand
- TestPolicyBrokenConfigCommand
- TestDERPVerifyEndpoint
- TestResolveMagicDNS
- TestValidateResolvConf
- TestDERPServerScenario


@ -27,6 +27,7 @@ linters:
- nolintlint
- musttag # causes issues with imported libs
- depguard
- exportloopref
# We should strive to enable these:
- wrapcheck
@ -56,9 +57,14 @@ linters-settings:
- ok
- c
- tt
- tx
- rx
gocritic:
disabled-checks:
- appendAssign
# TODO(kradalby): Remove this
- ifElseChain
nlreturn:
block-size: 4


@ -89,6 +89,7 @@ This will also affect the way you [reference users in policies](https://github.c
- Added conversion of 'Hostname' to 'givenName' in a node with FQDN rules applied [#2198](https://github.com/juanfont/headscale/pull/2198)
- Fixed updating of hostname and givenName when it is updated in HostInfo [#2199](https://github.com/juanfont/headscale/pull/2199)
- Fixed missing `stable-debug` container tag [#2232](https://github.com/juanfont/headscale/pull/2232)
- Loosened up `server_url` and `base_domain` check, which was overly strict in some cases [#2248](https://github.com/juanfont/headscale/pull/2248)
## 0.23.0 (2024-09-18)

Dockerfile.derper (new file, 19 lines)

@ -0,0 +1,19 @@
# For testing purposes only
FROM golang:alpine AS build-env
WORKDIR /go/src
RUN apk add --no-cache git
ARG VERSION_BRANCH=main
RUN git clone https://github.com/tailscale/tailscale.git --branch=$VERSION_BRANCH --depth=1
WORKDIR /go/src/tailscale
ARG TARGETARCH
RUN GOARCH=$TARGETARCH go install -v ./cmd/derper
FROM alpine:3.18
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/
ENTRYPOINT [ "/usr/local/bin/derper" ]
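For local experiments, the image can be built and pointed at a headscale `/verify` endpoint. The tag, hostname, and URL below are illustrative assumptions, and `--verify-client-url` is a flag of recent upstream derper builds, not something added by this change:

```
docker build -f Dockerfile.derper -t derper:dev .
docker run --rm derper:dev \
  --hostname=derp.example.com \
  --verify-client-url=https://headscale.example.com/verify
```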


@ -457,6 +457,8 @@ func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *mux.Router {
router.HandleFunc("/swagger/v1/openapiv2.json", headscale.SwaggerAPIv1).
Methods(http.MethodGet)
router.HandleFunc("/verify", h.VerifyHandler).Methods(http.MethodPost)
if h.cfg.DERP.ServerEnabled {
router.HandleFunc("/derp", h.DERPServer.DERPHandler)
router.HandleFunc("/derp/probe", derpServer.DERPProbeHandler)


@ -498,6 +498,25 @@ func NewHeadscaleDatabase(
return err
}
// Set up indexes and unique constraints outside of GORM, as it does not
// support conditional unique constraints.
// This ensures the following:
// - The combination of user name and provider_identifier is unique
// - A provider_identifier is unique
// - A user name is unique if no provider_identifier is set
for _, idx := range []string{
"DROP INDEX IF EXISTS `idx_provider_identifier`",
"DROP INDEX IF EXISTS `idx_name_provider_identifier`",
"CREATE UNIQUE INDEX IF NOT EXISTS `idx_provider_identifier` ON `users` (`provider_identifier`) WHERE provider_identifier IS NOT NULL;",
"CREATE UNIQUE INDEX IF NOT EXISTS `idx_name_provider_identifier` ON `users` (`name`,`provider_identifier`);",
"CREATE UNIQUE INDEX IF NOT EXISTS `idx_name_no_provider_identifier` ON `users` (`name`) WHERE provider_identifier IS NULL;",
} {
err = tx.Exec(idx).Error
if err != nil {
return fmt.Errorf("creating username index: %w", err)
}
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
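A quick sketch of what the partial indexes above allow and reject (SQLite syntax, illustrative values):

```
-- Local users (no provider_identifier) must have distinct names:
INSERT INTO users (name) VALUES ('alice');  -- ok
INSERT INTO users (name) VALUES ('alice');  -- fails: idx_name_no_provider_identifier

-- A local 'alice' and an OIDC 'alice' can coexist, but each
-- provider_identifier may only appear once:
INSERT INTO users (name, provider_identifier)
VALUES ('alice', 'https://idp.example.com/alice');  -- ok
INSERT INTO users (name, provider_identifier)
VALUES ('alice', 'https://idp.example.com/alice');  -- fails: idx_provider_identifier
```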


@ -46,7 +46,7 @@ func TestMigrations(t *testing.T) {
routes, err := Read(h.DB, func(rx *gorm.DB) (types.Routes, error) {
return GetRoutes(rx)
})
assert.NoError(t, err)
require.NoError(t, err)
assert.Len(t, routes, 10)
want := types.Routes{
@ -72,7 +72,7 @@ func TestMigrations(t *testing.T) {
routes, err := Read(h.DB, func(rx *gorm.DB) (types.Routes, error) {
return GetRoutes(rx)
})
assert.NoError(t, err)
require.NoError(t, err)
assert.Len(t, routes, 4)
want := types.Routes{
@ -134,7 +134,7 @@ func TestMigrations(t *testing.T) {
return append(kratest, testkra...), nil
})
assert.NoError(t, err)
require.NoError(t, err)
assert.Len(t, keys, 5)
want := []types.PreAuthKey{
@ -179,7 +179,7 @@ func TestMigrations(t *testing.T) {
nodes, err := Read(h.DB, func(rx *gorm.DB) (types.Nodes, error) {
return ListNodes(rx)
})
assert.NoError(t, err)
require.NoError(t, err)
for _, node := range nodes {
assert.Falsef(t, node.MachineKey.IsZero(), "expected non zero machinekey")
@ -271,8 +271,8 @@ func TestConstraints(t *testing.T) {
require.NoError(t, err)
_, err = CreateUser(db, "user1")
require.Error(t, err)
// assert.Contains(t, err.Error(), "UNIQUE constraint failed: users.username")
require.Contains(t, err.Error(), "user already exists")
assert.Contains(t, err.Error(), "UNIQUE constraint failed:")
// require.Contains(t, err.Error(), "user already exists")
},
},
{
@ -295,7 +295,7 @@ func TestConstraints(t *testing.T) {
err = db.Save(&user).Error
require.Error(t, err)
require.Contains(t, err.Error(), "UNIQUE constraint failed: users.provider_identifier")
require.Contains(t, err.Error(), "UNIQUE constraint failed:")
},
},
{
@ -318,7 +318,7 @@ func TestConstraints(t *testing.T) {
err = db.Save(&user).Error
require.Error(t, err)
require.Contains(t, err.Error(), "UNIQUE constraint failed: users.provider_identifier")
require.Contains(t, err.Error(), "UNIQUE constraint failed:")
},
},
{
@ -329,8 +329,8 @@ func TestConstraints(t *testing.T) {
user := types.User{
Name: "user1",
ProviderIdentifier: sql.NullString{String: "http://test.com/user1", Valid: true},
}
user.ProviderIdentifier.String = "http://test.com/user1"
err = db.Save(&user).Error
require.NoError(t, err)
@ -341,8 +341,8 @@ func TestConstraints(t *testing.T) {
run: func(t *testing.T, db *gorm.DB) {
user := types.User{
Name: "user1",
ProviderIdentifier: sql.NullString{String: "http://test.com/user1", Valid: true},
}
user.ProviderIdentifier.String = "http://test.com/user1"
err := db.Save(&user).Error
require.NoError(t, err)
@ -360,7 +360,7 @@ func TestConstraints(t *testing.T) {
t.Fatalf("creating database: %s", err)
}
tt.run(t, db.DB)
tt.run(t, db.DB.Debug())
})
}


@ -12,6 +12,7 @@ import (
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/net/tsaddr"
"tailscale.com/types/ptr"
)
@ -457,7 +458,12 @@ func TestBackfillIPAddresses(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
db := tt.dbFunc()
alloc, err := NewIPAllocator(db, tt.prefix4, tt.prefix6, types.IPAllocationStrategySequential)
alloc, err := NewIPAllocator(
db,
tt.prefix4,
tt.prefix6,
types.IPAllocationStrategySequential,
)
if err != nil {
t.Fatalf("failed to set up ip alloc: %s", err)
}
@ -482,24 +488,29 @@ func TestBackfillIPAddresses(t *testing.T) {
}
func TestIPAllocatorNextNoReservedIPs(t *testing.T) {
alloc, err := NewIPAllocator(db, ptr.To(tsaddr.CGNATRange()), ptr.To(tsaddr.TailscaleULARange()), types.IPAllocationStrategySequential)
alloc, err := NewIPAllocator(
db,
ptr.To(tsaddr.CGNATRange()),
ptr.To(tsaddr.TailscaleULARange()),
types.IPAllocationStrategySequential,
)
if err != nil {
t.Fatalf("failed to set up ip alloc: %s", err)
}
// Validate that we do not give out 100.100.100.100
nextQuad100, err := alloc.next(na("100.100.100.99"), ptr.To(tsaddr.CGNATRange()))
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, na("100.100.100.101"), *nextQuad100)
// Validate that we do not give out fd7a:115c:a1e0::53
nextQuad100v6, err := alloc.next(na("fd7a:115c:a1e0::52"), ptr.To(tsaddr.TailscaleULARange()))
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, na("fd7a:115c:a1e0::54"), *nextQuad100v6)
// Validate that we do not give out addresses in the ChromeOS VM range (100.115.92.0/23)
nextChrome, err := alloc.next(na("100.115.91.255"), ptr.To(tsaddr.CGNATRange()))
t.Logf("chrome: %s", nextChrome.String())
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, na("100.115.94.0"), *nextChrome)
}


@ -17,6 +17,7 @@ import (
"github.com/juanfont/headscale/hscontrol/util"
"github.com/puzpuzpuz/xsync/v3"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/check.v1"
"gorm.io/gorm"
"tailscale.com/net/tsaddr"
@ -558,17 +559,17 @@ func TestAutoApproveRoutes(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
adb, err := newTestDB()
assert.NoError(t, err)
require.NoError(t, err)
pol, err := policy.LoadACLPolicyFromBytes([]byte(tt.acl))
assert.NoError(t, err)
require.NoError(t, err)
assert.NotNil(t, pol)
user, err := adb.CreateUser("test")
assert.NoError(t, err)
require.NoError(t, err)
pak, err := adb.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
assert.NoError(t, err)
require.NoError(t, err)
nodeKey := key.NewNode()
machineKey := key.NewMachine()
@ -590,21 +591,21 @@ func TestAutoApproveRoutes(t *testing.T) {
}
trx := adb.DB.Save(&node)
assert.NoError(t, trx.Error)
require.NoError(t, trx.Error)
sendUpdate, err := adb.SaveNodeRoutes(&node)
assert.NoError(t, err)
require.NoError(t, err)
assert.False(t, sendUpdate)
node0ByID, err := adb.GetNodeByID(0)
assert.NoError(t, err)
require.NoError(t, err)
// TODO(kradalby): Check state update
err = adb.EnableAutoApprovedRoutes(pol, node0ByID)
assert.NoError(t, err)
require.NoError(t, err)
enabledRoutes, err := adb.GetEnabledRoutes(node0ByID)
assert.NoError(t, err)
require.NoError(t, err)
assert.Len(t, enabledRoutes, len(tt.want))
tsaddr.SortPrefixes(enabledRoutes)
@ -697,13 +698,13 @@ func TestListEphemeralNodes(t *testing.T) {
}
user, err := db.CreateUser("test")
assert.NoError(t, err)
require.NoError(t, err)
pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil)
assert.NoError(t, err)
require.NoError(t, err)
pakEph, err := db.CreatePreAuthKey(types.UserID(user.ID), false, true, nil, nil)
assert.NoError(t, err)
require.NoError(t, err)
node := types.Node{
ID: 0,
@ -726,16 +727,16 @@ func TestListEphemeralNodes(t *testing.T) {
}
err = db.DB.Save(&node).Error
assert.NoError(t, err)
require.NoError(t, err)
err = db.DB.Save(&nodeEph).Error
assert.NoError(t, err)
require.NoError(t, err)
nodes, err := db.ListNodes()
assert.NoError(t, err)
require.NoError(t, err)
ephemeralNodes, err := db.ListEphemeralNodes()
assert.NoError(t, err)
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Len(t, ephemeralNodes, 1)
@ -753,10 +754,10 @@ func TestRenameNode(t *testing.T) {
}
user, err := db.CreateUser("test")
assert.NoError(t, err)
require.NoError(t, err)
user2, err := db.CreateUser("test2")
assert.NoError(t, err)
require.NoError(t, err)
node := types.Node{
ID: 0,
@ -777,10 +778,10 @@ func TestRenameNode(t *testing.T) {
}
err = db.DB.Save(&node).Error
assert.NoError(t, err)
require.NoError(t, err)
err = db.DB.Save(&node2).Error
assert.NoError(t, err)
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNode(tx, node, nil, nil)
@ -790,10 +791,10 @@ func TestRenameNode(t *testing.T) {
_, err = RegisterNode(tx, node2, nil, nil)
return err
})
assert.NoError(t, err)
require.NoError(t, err)
nodes, err := db.ListNodes()
assert.NoError(t, err)
require.NoError(t, err)
assert.Len(t, nodes, 2)
@ -815,26 +816,26 @@ func TestRenameNode(t *testing.T) {
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, "newname")
})
assert.NoError(t, err)
require.NoError(t, err)
nodes, err = db.ListNodes()
assert.NoError(t, err)
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Equal(t, nodes[0].Hostname, "test")
assert.Equal(t, nodes[0].GivenName, "newname")
assert.Equal(t, "test", nodes[0].Hostname)
assert.Equal(t, "newname", nodes[0].GivenName)
// Nodes can reuse name that is no longer used
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[1].ID, "test")
})
assert.NoError(t, err)
require.NoError(t, err)
nodes, err = db.ListNodes()
assert.NoError(t, err)
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Equal(t, nodes[0].Hostname, "test")
assert.Equal(t, nodes[0].GivenName, "newname")
assert.Equal(t, nodes[1].GivenName, "test")
assert.Equal(t, "test", nodes[0].Hostname)
assert.Equal(t, "newname", nodes[0].GivenName)
assert.Equal(t, "test", nodes[1].GivenName)
// Nodes cannot be renamed to used names
err = db.Write(func(tx *gorm.DB) error {


@ -90,9 +90,10 @@ func (s *Suite) TestRenameUser(c *check.C) {
c.Assert(err, check.IsNil)
c.Assert(userTest2.Name, check.Equals, "test2")
want := "UNIQUE constraint failed"
err = db.RenameUser(types.UserID(userTest2.ID), "test-renamed")
if !strings.Contains(err.Error(), "UNIQUE constraint failed") {
c.Fatalf("expected failure with unique constraint, got: %s", err.Error())
if err == nil || !strings.Contains(err.Error(), want) {
c.Fatalf("expected failure with unique constraint, want: %q got: %q", want, err)
}
}


@ -4,6 +4,7 @@ import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
@ -56,6 +57,65 @@ func parseCabailityVersion(req *http.Request) (tailcfg.CapabilityVersion, error)
return tailcfg.CapabilityVersion(clientCapabilityVersion), nil
}
func (h *Headscale) handleVerifyRequest(
req *http.Request,
) (bool, error) {
body, err := io.ReadAll(req.Body)
if err != nil {
return false, fmt.Errorf("cannot read request body: %w", err)
}
var derpAdmitClientRequest tailcfg.DERPAdmitClientRequest
if err := json.Unmarshal(body, &derpAdmitClientRequest); err != nil {
return false, fmt.Errorf("cannot parse derpAdmitClientRequest: %w", err)
}
nodes, err := h.db.ListNodes()
if err != nil {
return false, fmt.Errorf("cannot list nodes: %w", err)
}
return nodes.ContainsNodeKey(derpAdmitClientRequest.NodePublic), nil
}
// See https://github.com/tailscale/tailscale/blob/964282d34f06ecc06ce644769c66b0b31d118340/derp/derp_server.go#L1159; DERP uses verifyClientsURL to check whether a client is allowed to connect to the DERP server.
func (h *Headscale) VerifyHandler(
writer http.ResponseWriter,
req *http.Request,
) {
if req.Method != http.MethodPost {
http.Error(writer, "Wrong method", http.StatusMethodNotAllowed)
return
}
log.Debug().
Str("handler", "/verify").
Msg("verify client")
allow, err := h.handleVerifyRequest(req)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to verify client")
http.Error(writer, "Internal error", http.StatusInternalServerError)
}
resp := tailcfg.DERPAdmitClientResponse{
Allow: allow,
}
writer.Header().Set("Content-Type", "application/json")
writer.WriteHeader(http.StatusOK)
err = json.NewEncoder(writer).Encode(resp)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
}
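The request and response bodies are tailscale's `tailcfg.DERPAdmitClientRequest` and `DERPAdmitClientResponse`. A sketch of the exchange (key and address are made up):

```
POST /verify
{"NodePublic":"nodekey:6d0a...","Source":"100.64.0.1"}

HTTP/1.1 200 OK
Content-Type: application/json
{"Allow":true}
```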
// KeyHandler provides the Headscale pub key
// Listens in /key.
func (h *Headscale) KeyHandler(


@ -505,7 +505,7 @@ func (a *AuthProviderOIDC) registerNode(
}
// TODO(kradalby):
// Rewrite in elem-go
// Rewrite in elem-go.
func renderOIDCCallbackTemplate(
user *types.User,
) (*bytes.Buffer, error) {


@ -613,7 +613,7 @@ func (pol *ACLPolicy) ExpandAlias(
// TODO(kradalby): It is quite hard to understand what this function is doing,
// it seems like it is trying to ensure that we don't include nodes that are tagged
// when we look up the nodes owned by a user.
// This should be refactored to be more clear as part of the Tags work in #1369
// This should be refactored to be more clear as part of the Tags work in #1369.
func excludeCorrectlyTaggedNodes(
aclPolicy *ACLPolicy,
nodes types.Nodes,


@ -11,7 +11,7 @@ import (
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go4.org/netipx"
"gopkg.in/check.v1"
"tailscale.com/net/tsaddr"
@ -1824,12 +1824,20 @@ func TestTheInternet(t *testing.T) {
for i := range internetPrefs {
if internetPrefs[i].String() != hsExitNodeDest[i].IP {
t.Errorf("prefix from internet set %q != hsExit list %q", internetPrefs[i].String(), hsExitNodeDest[i].IP)
t.Errorf(
"prefix from internet set %q != hsExit list %q",
internetPrefs[i].String(),
hsExitNodeDest[i].IP,
)
}
}
if len(internetPrefs) != len(hsExitNodeDest) {
t.Fatalf("expected same length of prefixes, internet: %d, hsExit: %d", len(internetPrefs), len(hsExitNodeDest))
t.Fatalf(
"expected same length of prefixes, internet: %d, hsExit: %d",
len(internetPrefs),
len(hsExitNodeDest),
)
}
}
@ -2036,7 +2044,12 @@ func TestReduceFilterRules(t *testing.T) {
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
@ -2049,7 +2062,12 @@ func TestReduceFilterRules(t *testing.T) {
},
},
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: hsExitNodeDest,
},
},
@ -2132,7 +2150,12 @@ func TestReduceFilterRules(t *testing.T) {
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
@ -2145,7 +2168,12 @@ func TestReduceFilterRules(t *testing.T) {
},
},
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{IP: "0.0.0.0/5", Ports: tailcfg.PortRangeAny},
{IP: "8.0.0.0/7", Ports: tailcfg.PortRangeAny},
@ -2217,7 +2245,10 @@ func TestReduceFilterRules(t *testing.T) {
IPv6: iap("fd7a:115c:a1e0::100"),
User: types.User{Name: "user100"},
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("8.0.0.0/16"), netip.MustParsePrefix("16.0.0.0/16")},
RoutableIPs: []netip.Prefix{
netip.MustParsePrefix("8.0.0.0/16"),
netip.MustParsePrefix("16.0.0.0/16"),
},
},
},
peers: types.Nodes{
@ -2234,7 +2265,12 @@ func TestReduceFilterRules(t *testing.T) {
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
@ -2247,7 +2283,12 @@ func TestReduceFilterRules(t *testing.T) {
},
},
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{
IP: "8.0.0.0/8",
@ -2294,7 +2335,10 @@ func TestReduceFilterRules(t *testing.T) {
IPv6: iap("fd7a:115c:a1e0::100"),
User: types.User{Name: "user100"},
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("8.0.0.0/8"), netip.MustParsePrefix("16.0.0.0/8")},
RoutableIPs: []netip.Prefix{
netip.MustParsePrefix("8.0.0.0/8"),
netip.MustParsePrefix("16.0.0.0/8"),
},
},
},
peers: types.Nodes{
@ -2311,7 +2355,12 @@ func TestReduceFilterRules(t *testing.T) {
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
@ -2324,7 +2373,12 @@ func TestReduceFilterRules(t *testing.T) {
},
},
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{
IP: "8.0.0.0/16",
@ -3299,7 +3353,11 @@ func TestSSHRules(t *testing.T) {
SSHUsers: map[string]string{
"autogroup:nonroot": "=",
},
Action: &tailcfg.SSHAction{Accept: true, AllowAgentForwarding: true, AllowLocalPortForwarding: true},
Action: &tailcfg.SSHAction{
Accept: true,
AllowAgentForwarding: true,
AllowLocalPortForwarding: true,
},
},
{
SSHUsers: map[string]string{
@ -3310,7 +3368,11 @@ func TestSSHRules(t *testing.T) {
Any: true,
},
},
Action: &tailcfg.SSHAction{Accept: true, AllowAgentForwarding: true, AllowLocalPortForwarding: true},
Action: &tailcfg.SSHAction{
Accept: true,
AllowAgentForwarding: true,
AllowLocalPortForwarding: true,
},
},
{
Principals: []*tailcfg.SSHPrincipal{
@ -3321,7 +3383,11 @@ func TestSSHRules(t *testing.T) {
SSHUsers: map[string]string{
"autogroup:nonroot": "=",
},
Action: &tailcfg.SSHAction{Accept: true, AllowAgentForwarding: true, AllowLocalPortForwarding: true},
Action: &tailcfg.SSHAction{
Accept: true,
AllowAgentForwarding: true,
AllowLocalPortForwarding: true,
},
},
{
SSHUsers: map[string]string{
@ -3332,7 +3398,11 @@ func TestSSHRules(t *testing.T) {
Any: true,
},
},
Action: &tailcfg.SSHAction{Accept: true, AllowAgentForwarding: true, AllowLocalPortForwarding: true},
Action: &tailcfg.SSHAction{
Accept: true,
AllowAgentForwarding: true,
AllowLocalPortForwarding: true,
},
},
}},
},
@ -3392,7 +3462,7 @@ func TestSSHRules(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := tt.pol.CompileSSHPolicy(&tt.node, tt.peers)
assert.NoError(t, err)
require.NoError(t, err)
if diff := cmp.Diff(tt.want, got); diff != "" {
t.Errorf("TestSSHRules() unexpected result (-want +got):\n%s", diff)
@ -3499,7 +3569,7 @@ func TestValidExpandTagOwnersInSources(t *testing.T) {
}
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{})
assert.NoError(t, err)
require.NoError(t, err)
want := []tailcfg.FilterRule{
{
@ -3550,7 +3620,7 @@ func TestInvalidTagValidUser(t *testing.T) {
}
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{})
assert.NoError(t, err)
require.NoError(t, err)
want := []tailcfg.FilterRule{
{
@ -3609,7 +3679,7 @@ func TestValidExpandTagOwnersInDestinations(t *testing.T) {
// c.Assert(rules[0].DstPorts[0].IP, check.Equals, "100.64.0.1/32")
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{})
assert.NoError(t, err)
require.NoError(t, err)
want := []tailcfg.FilterRule{
{
@ -3679,7 +3749,7 @@ func TestValidTagInvalidUser(t *testing.T) {
}
got, _, err := GenerateFilterAndSSHRulesForTests(pol, node, types.Nodes{nodes2})
assert.NoError(t, err)
require.NoError(t, err)
want := []tailcfg.FilterRule{
{


@ -13,7 +13,7 @@ func Windows(url string) *elem.Element {
elem.Text("headscale - Windows"),
),
elem.Body(attrs.Props{
attrs.Style : bodyStyle.ToInline(),
attrs.Style: bodyStyle.ToInline(),
},
headerOne("headscale: Windows configuration"),
elem.P(nil,
@ -21,7 +21,8 @@ func Windows(url string) *elem.Element {
elem.A(attrs.Props{
attrs.Href: "https://tailscale.com/download/windows",
attrs.Rel: "noreferrer noopener",
attrs.Target: "_blank"},
attrs.Target: "_blank",
},
elem.Text("Tailscale for Windows ")),
elem.Text("and install it."),
),


@ -28,8 +28,9 @@ const (
maxDuration time.Duration = 1<<63 - 1
)
var errOidcMutuallyExclusive = errors.New(
"oidc_client_secret and oidc_client_secret_path are mutually exclusive",
var (
errOidcMutuallyExclusive = errors.New("oidc_client_secret and oidc_client_secret_path are mutually exclusive")
errServerURLSuffix = errors.New("server_url cannot be part of base_domain in a way that could make the DERP and headscale server unreachable")
)
type IPAllocationStrategy string
@ -835,11 +836,10 @@ func LoadServerConfig() (*Config, error) {
// - DERP run on their own domains
// - Control plane runs on login.tailscale.com/controlplane.tailscale.com
// - MagicDNS (BaseDomain) for users is on a *.ts.net domain per tailnet (e.g. tail-scale.ts.net)
if dnsConfig.BaseDomain != "" &&
strings.Contains(serverURL, dnsConfig.BaseDomain) {
return nil, errors.New(
"server_url cannot contain the base_domain, this will cause the headscale server and embedded DERP to become unreachable from the Tailscale node.",
)
if dnsConfig.BaseDomain != "" {
if err := isSafeServerURL(serverURL, dnsConfig.BaseDomain); err != nil {
return nil, err
}
}
return &Config{
@ -936,6 +936,37 @@ func LoadServerConfig() (*Config, error) {
}, nil
}
// BaseDomain cannot be a suffix of the server URL.
// This is because Tailscale takes over the domain in BaseDomain,
// causing the headscale server and DERP to be unreachable.
// For Tailscale upstream, the following is true:
// - DERP run on their own domains.
// - Control plane runs on login.tailscale.com/controlplane.tailscale.com.
// - MagicDNS (BaseDomain) for users is on a *.ts.net domain per tailnet (e.g. tail-scale.ts.net).
func isSafeServerURL(serverURL, baseDomain string) error {
server, err := url.Parse(serverURL)
if err != nil {
return err
}
serverDomainParts := strings.Split(server.Host, ".")
baseDomainParts := strings.Split(baseDomain, ".")
if len(serverDomainParts) <= len(baseDomainParts) {
return nil
}
s := len(serverDomainParts)
b := len(baseDomainParts)
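// Compare labels right to left; the first mismatch proves that
// base_domain is not a suffix of the server host, which is safe.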
for i := range len(baseDomainParts) {
if serverDomainParts[s-i-1] != baseDomainParts[b-i-1] {
return nil
}
}
return errServerURLSuffix
}
type deprecator struct {
warns set.Set[string]
fatals set.Set[string]


@ -1,6 +1,7 @@
package types
import (
"fmt"
"os"
"path/filepath"
"testing"
@ -8,6 +9,7 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
)
@ -35,8 +37,17 @@ func TestReadConfig(t *testing.T) {
MagicDNS: true,
BaseDomain: "example.com",
Nameservers: Nameservers{
Global: []string{"1.1.1.1", "1.0.0.1", "2606:4700:4700::1111", "2606:4700:4700::1001", "https://dns.nextdns.io/abc123"},
Split: map[string][]string{"darp.headscale.net": {"1.1.1.1", "8.8.8.8"}, "foo.bar.com": {"1.1.1.1"}},
Global: []string{
"1.1.1.1",
"1.0.0.1",
"2606:4700:4700::1111",
"2606:4700:4700::1001",
"https://dns.nextdns.io/abc123",
},
Split: map[string][]string{
"darp.headscale.net": {"1.1.1.1", "8.8.8.8"},
"foo.bar.com": {"1.1.1.1"},
},
},
ExtraRecords: []tailcfg.DNSRecord{
{Name: "grafana.myvpn.example.com", Type: "A", Value: "100.64.0.3"},
@ -91,8 +102,17 @@ func TestReadConfig(t *testing.T) {
MagicDNS: false,
BaseDomain: "example.com",
Nameservers: Nameservers{
Global: []string{"1.1.1.1", "1.0.0.1", "2606:4700:4700::1111", "2606:4700:4700::1001", "https://dns.nextdns.io/abc123"},
Split: map[string][]string{"darp.headscale.net": {"1.1.1.1", "8.8.8.8"}, "foo.bar.com": {"1.1.1.1"}},
Global: []string{
"1.1.1.1",
"1.0.0.1",
"2606:4700:4700::1111",
"2606:4700:4700::1001",
"https://dns.nextdns.io/abc123",
},
Split: map[string][]string{
"darp.headscale.net": {"1.1.1.1", "8.8.8.8"},
"foo.bar.com": {"1.1.1.1"},
},
},
ExtraRecords: []tailcfg.DNSRecord{
{Name: "grafana.myvpn.example.com", Type: "A", Value: "100.64.0.3"},
@ -139,7 +159,7 @@ func TestReadConfig(t *testing.T) {
return LoadServerConfig()
},
want: nil,
wantErr: "server_url cannot contain the base_domain, this will cause the headscale server and embedded DERP to become unreachable from the Tailscale node.",
wantErr: errServerURLSuffix.Error(),
},
{
name: "base-domain-not-in-server-url",
@ -186,7 +206,7 @@ func TestReadConfig(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
viper.Reset()
err := LoadConfig(tt.configPath, true)
assert.NoError(t, err)
require.NoError(t, err)
conf, err := tt.setup(t)
@ -196,7 +216,7 @@ func TestReadConfig(t *testing.T) {
return
}
assert.NoError(t, err)
require.NoError(t, err)
if diff := cmp.Diff(tt.want, conf); diff != "" {
t.Errorf("ReadConfig() mismatch (-want +got):\n%s", diff)
@ -276,10 +296,10 @@ func TestReadConfigFromEnv(t *testing.T) {
viper.Reset()
err := LoadConfig("testdata/minimal.yaml", true)
assert.NoError(t, err)
require.NoError(t, err)
conf, err := tt.setup(t)
assert.NoError(t, err)
require.NoError(t, err)
if diff := cmp.Diff(tt.want, conf); diff != "" {
t.Errorf("ReadConfig() mismatch (-want +got):\n%s", diff)
@ -310,13 +330,25 @@ noise:
// Check configuration validation errors (1)
err = LoadConfig(tmpDir, false)
assert.NoError(t, err)
require.NoError(t, err)
err = validateServerConfig()
assert.Error(t, err)
assert.Contains(t, err.Error(), "Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both")
assert.Contains(t, err.Error(), "Fatal config error: the only supported values for tls_letsencrypt_challenge_type are")
assert.Contains(t, err.Error(), "Fatal config error: server_url must start with https:// or http://")
require.Error(t, err)
assert.Contains(
t,
err.Error(),
"Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both",
)
assert.Contains(
t,
err.Error(),
"Fatal config error: the only supported values for tls_letsencrypt_challenge_type are",
)
assert.Contains(
t,
err.Error(),
"Fatal config error: server_url must start with https:// or http://",
)
// Check configuration validation errors (2)
configYaml = []byte(`---
@ -331,5 +363,66 @@ tls_letsencrypt_challenge_type: TLS-ALPN-01
t.Fatalf("Couldn't write file %s", configFilePath)
}
err = LoadConfig(tmpDir, false)
assert.NoError(t, err)
require.NoError(t, err)
}
// OK
// server_url: headscale.com, base: clients.headscale.com
// server_url: headscale.com, base: headscale.net
//
// NOT OK
// server_url: server.headscale.com, base: headscale.com.
func TestSafeServerURL(t *testing.T) {
tests := []struct {
serverURL, baseDomain,
wantErr string
}{
{
serverURL: "https://example.com",
baseDomain: "example.org",
},
{
serverURL: "https://headscale.com",
baseDomain: "headscale.com",
},
{
serverURL: "https://headscale.com",
baseDomain: "clients.headscale.com",
},
{
serverURL: "https://headscale.com",
baseDomain: "clients.subdomain.headscale.com",
},
{
serverURL: "https://headscale.kristoffer.com",
baseDomain: "mybase",
},
{
serverURL: "https://server.headscale.com",
baseDomain: "headscale.com",
wantErr: errServerURLSuffix.Error(),
},
{
serverURL: "https://server.subdomain.headscale.com",
baseDomain: "headscale.com",
wantErr: errServerURLSuffix.Error(),
},
{
serverURL: "http://foo\x00",
wantErr: `parse "http://foo\x00": net/url: invalid control character in URL`,
},
}
for _, tt := range tests {
testName := fmt.Sprintf("server=%s domain=%s", tt.serverURL, tt.baseDomain)
t.Run(testName, func(t *testing.T) {
err := isSafeServerURL(tt.serverURL, tt.baseDomain)
if tt.wantErr != "" {
assert.EqualError(t, err, tt.wantErr)
return
}
assert.NoError(t, err)
})
}
}


@ -223,6 +223,16 @@ func (nodes Nodes) FilterByIP(ip netip.Addr) Nodes {
return found
}
func (nodes Nodes) ContainsNodeKey(nodeKey key.NodePublic) bool {
for _, node := range nodes {
if node.NodeKey == nodeKey {
return true
}
}
return false
}
func (node *Node) Proto() *v1.Node {
nodeProto := &v1.Node{
Id: uint64(node.ID),


@ -8,7 +8,7 @@ prefixes:
database:
type: sqlite3
server_url: "https://derp.no"
server_url: "https://server.derp.no"
dns:
magic_dns: true


@ -27,7 +27,7 @@ type User struct {
// Username for the user, is used if email is empty
// Should not be used, please use Username().
Name string `gorm:"uniqueIndex:idx_name_provider_identifier;index"`
Name string
// Typically the full name of the user
DisplayName string
@ -39,7 +39,7 @@ type User struct {
// Unique identifier of the user from OIDC,
// comes from `sub` claim in the OIDC token
// and is used to lookup the user.
ProviderIdentifier sql.NullString `gorm:"uniqueIndex:idx_name_provider_identifier;uniqueIndex:idx_provider_identifier"`
ProviderIdentifier sql.NullString
// Provider is the origin of the user account,
// same as RegistrationMethod, without authkey.


@ -4,12 +4,13 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGenerateRandomStringDNSSafe(t *testing.T) {
for i := 0; i < 100000; i++ {
str, err := GenerateRandomStringDNSSafe(8)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, str, 8)
}
}


@ -11,10 +11,10 @@ Tests are located in files ending with `_test.go` and the framework is located
## Running integration tests locally
The easiest way to run tests locally is to use `[act](INSERT LINK)`, a local GitHub Actions runner:
The easiest way to run tests locally is to use [act](https://github.com/nektos/act), a local GitHub Actions runner:
```
act pull_request -W .github/workflows/test-integration-v2-TestPingAllByIP.yaml
act pull_request -W .github/workflows/test-integration.yaml
```
Alternatively, the `docker run` command in each GitHub workflow file can be used.


@ -12,6 +12,7 @@ import (
"github.com/juanfont/headscale/integration/hsic"
"github.com/juanfont/headscale/integration/tsic"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var veryLargeDestination = []string{
@ -54,7 +55,7 @@ func aclScenario(
) *Scenario {
t.Helper()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
spec := map[string]int{
"user1": clientsPerUser,
@ -77,10 +78,10 @@ func aclScenario(
hsic.WithACLPolicy(policy),
hsic.WithTestName("acl"),
)
assertNoErr(t, err)
require.NoError(t, err)
_, err = scenario.ListTailscaleClientsFQDNs()
assertNoErrListFQDN(t, err)
require.NoError(t, err)
return scenario
}
@ -267,7 +268,7 @@ func TestACLHostsInNetMapTable(t *testing.T) {
for name, testCase := range tests {
t.Run(name, func(t *testing.T) {
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
spec := testCase.users
@ -275,22 +276,22 @@ func TestACLHostsInNetMapTable(t *testing.T) {
[]tsic.Option{},
hsic.WithACLPolicy(&testCase.policy),
)
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
allClients, err := scenario.ListTailscaleClients()
assertNoErr(t, err)
require.NoError(t, err)
err = scenario.WaitForTailscaleSyncWithPeerCount(testCase.want["user1"])
assertNoErrSync(t, err)
require.NoError(t, err)
for _, client := range allClients {
status, err := client.Status()
assertNoErr(t, err)
require.NoError(t, err)
user := status.User[status.Self.UserID].LoginName
assert.Equal(t, (testCase.want[user]), len(status.Peer))
assert.Len(t, status.Peer, (testCase.want[user]))
}
})
}
@ -319,23 +320,23 @@ func TestACLAllowUser80Dst(t *testing.T) {
defer scenario.ShutdownAssertNoPanics(t)
user1Clients, err := scenario.ListTailscaleClients("user1")
assertNoErr(t, err)
require.NoError(t, err)
user2Clients, err := scenario.ListTailscaleClients("user2")
assertNoErr(t, err)
require.NoError(t, err)
// Test that user1 can visit all user2
for _, client := range user1Clients {
for _, peer := range user2Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Len(t, result, 13)
assertNoErr(t, err)
require.NoError(t, err)
}
}
@ -343,14 +344,14 @@ func TestACLAllowUser80Dst(t *testing.T) {
for _, client := range user2Clients {
for _, peer := range user1Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
}
}
}
@ -376,10 +377,10 @@ func TestACLDenyAllPort80(t *testing.T) {
defer scenario.ShutdownAssertNoPanics(t)
allClients, err := scenario.ListTailscaleClients()
assertNoErr(t, err)
require.NoError(t, err)
allHostnames, err := scenario.ListTailscaleClientsFQDNs()
assertNoErr(t, err)
require.NoError(t, err)
for _, client := range allClients {
for _, hostname := range allHostnames {
@ -394,7 +395,7 @@ func TestACLDenyAllPort80(t *testing.T) {
result, err := client.Curl(url)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
}
}
}
@ -420,23 +421,23 @@ func TestACLAllowUserDst(t *testing.T) {
defer scenario.ShutdownAssertNoPanics(t)
user1Clients, err := scenario.ListTailscaleClients("user1")
assertNoErr(t, err)
require.NoError(t, err)
user2Clients, err := scenario.ListTailscaleClients("user2")
assertNoErr(t, err)
require.NoError(t, err)
// Test that user1 can visit all user2
for _, client := range user1Clients {
for _, peer := range user2Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Len(t, result, 13)
assertNoErr(t, err)
require.NoError(t, err)
}
}
@ -444,14 +445,14 @@ func TestACLAllowUserDst(t *testing.T) {
for _, client := range user2Clients {
for _, peer := range user1Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
}
}
}
@ -476,23 +477,23 @@ func TestACLAllowStarDst(t *testing.T) {
defer scenario.ShutdownAssertNoPanics(t)
user1Clients, err := scenario.ListTailscaleClients("user1")
assertNoErr(t, err)
require.NoError(t, err)
user2Clients, err := scenario.ListTailscaleClients("user2")
assertNoErr(t, err)
require.NoError(t, err)
// Test that user1 can visit all user2
for _, client := range user1Clients {
for _, peer := range user2Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Len(t, result, 13)
assertNoErr(t, err)
require.NoError(t, err)
}
}
@ -500,14 +501,14 @@ func TestACLAllowStarDst(t *testing.T) {
for _, client := range user2Clients {
for _, peer := range user1Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
}
}
}
@ -537,23 +538,23 @@ func TestACLNamedHostsCanReachBySubnet(t *testing.T) {
defer scenario.ShutdownAssertNoPanics(t)
user1Clients, err := scenario.ListTailscaleClients("user1")
assertNoErr(t, err)
require.NoError(t, err)
user2Clients, err := scenario.ListTailscaleClients("user2")
assertNoErr(t, err)
require.NoError(t, err)
// Test that user1 can visit all user2
for _, client := range user1Clients {
for _, peer := range user2Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Len(t, result, 13)
assertNoErr(t, err)
require.NoError(t, err)
}
}
@ -561,14 +562,14 @@ func TestACLNamedHostsCanReachBySubnet(t *testing.T) {
for _, client := range user2Clients {
for _, peer := range user1Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Len(t, result, 13)
assertNoErr(t, err)
require.NoError(t, err)
}
}
}
@ -679,10 +680,10 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test1ip4 := netip.MustParseAddr("100.64.0.1")
test1ip6 := netip.MustParseAddr("fd7a:115c:a1e0::1")
test1, err := scenario.FindTailscaleClientByIP(test1ip6)
assertNoErr(t, err)
require.NoError(t, err)
test1fqdn, err := test1.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
test1ip4URL := fmt.Sprintf("http://%s/etc/hostname", test1ip4.String())
test1ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test1ip6.String())
test1fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test1fqdn)
@ -690,10 +691,10 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test2ip4 := netip.MustParseAddr("100.64.0.2")
test2ip6 := netip.MustParseAddr("fd7a:115c:a1e0::2")
test2, err := scenario.FindTailscaleClientByIP(test2ip6)
assertNoErr(t, err)
require.NoError(t, err)
test2fqdn, err := test2.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
test2ip4URL := fmt.Sprintf("http://%s/etc/hostname", test2ip4.String())
test2ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test2ip6.String())
test2fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test2fqdn)
@ -701,10 +702,10 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test3ip4 := netip.MustParseAddr("100.64.0.3")
test3ip6 := netip.MustParseAddr("fd7a:115c:a1e0::3")
test3, err := scenario.FindTailscaleClientByIP(test3ip6)
assertNoErr(t, err)
require.NoError(t, err)
test3fqdn, err := test3.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
test3ip4URL := fmt.Sprintf("http://%s/etc/hostname", test3ip4.String())
test3ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test3ip6.String())
test3fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test3fqdn)
@ -719,7 +720,7 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test3ip4URL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test1.Curl(test3ip6URL)
assert.Lenf(
@ -730,7 +731,7 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test3ip6URL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test1.Curl(test3fqdnURL)
assert.Lenf(
@ -741,7 +742,7 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test3fqdnURL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
// test2 can query test3
result, err = test2.Curl(test3ip4URL)
@ -753,7 +754,7 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test3ip4URL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test2.Curl(test3ip6URL)
assert.Lenf(
@ -764,7 +765,7 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test3ip6URL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test2.Curl(test3fqdnURL)
assert.Lenf(
@ -775,33 +776,33 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test3fqdnURL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
// test3 cannot query test1
result, err = test3.Curl(test1ip4URL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
result, err = test3.Curl(test1ip6URL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
result, err = test3.Curl(test1fqdnURL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
// test3 cannot query test2
result, err = test3.Curl(test2ip4URL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
result, err = test3.Curl(test2ip6URL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
result, err = test3.Curl(test2fqdnURL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
// test1 can query test2
result, err = test1.Curl(test2ip4URL)
@ -814,7 +815,7 @@ func TestACLNamedHostsCanReach(t *testing.T) {
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test1.Curl(test2ip6URL)
assert.Lenf(
t,
@ -824,7 +825,7 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test2ip6URL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test1.Curl(test2fqdnURL)
assert.Lenf(
@ -835,20 +836,20 @@ func TestACLNamedHostsCanReach(t *testing.T) {
test2fqdnURL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
// test2 cannot query test1
result, err = test2.Curl(test1ip4URL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
result, err = test2.Curl(test1ip6URL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
result, err = test2.Curl(test1fqdnURL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
})
}
}
@ -946,10 +947,10 @@ func TestACLDevice1CanAccessDevice2(t *testing.T) {
test1ip6 := netip.MustParseAddr("fd7a:115c:a1e0::1")
test1, err := scenario.FindTailscaleClientByIP(test1ip)
assert.NotNil(t, test1)
assertNoErr(t, err)
require.NoError(t, err)
test1fqdn, err := test1.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
test1ipURL := fmt.Sprintf("http://%s/etc/hostname", test1ip.String())
test1ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test1ip6.String())
test1fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test1fqdn)
@ -958,10 +959,10 @@ func TestACLDevice1CanAccessDevice2(t *testing.T) {
test2ip6 := netip.MustParseAddr("fd7a:115c:a1e0::2")
test2, err := scenario.FindTailscaleClientByIP(test2ip)
assert.NotNil(t, test2)
assertNoErr(t, err)
require.NoError(t, err)
test2fqdn, err := test2.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
test2ipURL := fmt.Sprintf("http://%s/etc/hostname", test2ip.String())
test2ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test2ip6.String())
test2fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test2fqdn)
@ -976,7 +977,7 @@ func TestACLDevice1CanAccessDevice2(t *testing.T) {
test2ipURL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test1.Curl(test2ip6URL)
assert.Lenf(
@ -987,7 +988,7 @@ func TestACLDevice1CanAccessDevice2(t *testing.T) {
test2ip6URL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test1.Curl(test2fqdnURL)
assert.Lenf(
@ -998,19 +999,19 @@ func TestACLDevice1CanAccessDevice2(t *testing.T) {
test2fqdnURL,
result,
)
assertNoErr(t, err)
require.NoError(t, err)
result, err = test2.Curl(test1ipURL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
result, err = test2.Curl(test1ip6URL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
result, err = test2.Curl(test1fqdnURL)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
})
}
}
@ -1020,7 +1021,7 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -1046,19 +1047,19 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) {
"HEADSCALE_POLICY_MODE": "database",
}),
)
assertNoErr(t, err)
require.NoError(t, err)
_, err = scenario.ListTailscaleClientsFQDNs()
assertNoErrListFQDN(t, err)
require.NoError(t, err)
err = scenario.WaitForTailscaleSync()
assertNoErrSync(t, err)
require.NoError(t, err)
user1Clients, err := scenario.ListTailscaleClients("user1")
assertNoErr(t, err)
require.NoError(t, err)
user2Clients, err := scenario.ListTailscaleClients("user2")
assertNoErr(t, err)
require.NoError(t, err)
all := append(user1Clients, user2Clients...)
@ -1070,19 +1071,19 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) {
}
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Len(t, result, 13)
assertNoErr(t, err)
require.NoError(t, err)
}
}
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
p := policy.ACLPolicy{
ACLs: []policy.ACL{
@ -1100,7 +1101,7 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) {
policyFilePath := "/etc/headscale/policy.json"
err = headscale.WriteFile(policyFilePath, pBytes)
assertNoErr(t, err)
require.NoError(t, err)
// No policy is present at this time.
// Add a new policy from a file.
@ -1113,7 +1114,7 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) {
policyFilePath,
},
)
assertNoErr(t, err)
require.NoError(t, err)
// Get the current policy and check
// if it is the same as the one we set.
@ -1129,7 +1130,7 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) {
},
&output,
)
assertNoErr(t, err)
require.NoError(t, err)
assert.Len(t, output.ACLs, 1)
@ -1141,14 +1142,14 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) {
for _, client := range user1Clients {
for _, peer := range user2Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Len(t, result, 13)
assertNoErr(t, err)
require.NoError(t, err)
}
}
@ -1156,14 +1157,14 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) {
for _, client := range user2Clients {
for _, peer := range user1Clients {
fqdn, err := peer.FQDN()
assertNoErr(t, err)
require.NoError(t, err)
url := fmt.Sprintf("http://%s/etc/hostname", fqdn)
t.Logf("url from %s to %s", client.Hostname(), url)
result, err := client.Curl(url)
assert.Empty(t, result)
assert.Error(t, err)
require.Error(t, err)
}
}
}


@ -13,6 +13,7 @@ import (
"github.com/juanfont/headscale/integration/hsic"
"github.com/juanfont/headscale/integration/tsic"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func executeAndUnmarshal[T any](headscale ControlServer, command []string, result T) error {
@ -34,7 +35,7 @@ func TestUserCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -43,10 +44,10 @@ func TestUserCommand(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clins"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
var listUsers []v1.User
err = executeAndUnmarshal(headscale,
@ -59,7 +60,7 @@ func TestUserCommand(t *testing.T) {
},
&listUsers,
)
assertNoErr(t, err)
require.NoError(t, err)
result := []string{listUsers[0].GetName(), listUsers[1].GetName()}
sort.Strings(result)
@ -81,7 +82,7 @@ func TestUserCommand(t *testing.T) {
"newname",
},
)
assertNoErr(t, err)
require.NoError(t, err)
var listAfterRenameUsers []v1.User
err = executeAndUnmarshal(headscale,
@ -94,7 +95,7 @@ func TestUserCommand(t *testing.T) {
},
&listAfterRenameUsers,
)
assertNoErr(t, err)
require.NoError(t, err)
result = []string{listAfterRenameUsers[0].GetName(), listAfterRenameUsers[1].GetName()}
sort.Strings(result)
@ -114,7 +115,7 @@ func TestPreAuthKeyCommand(t *testing.T) {
count := 3
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -122,13 +123,13 @@ func TestPreAuthKeyCommand(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clipak"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
keys := make([]*v1.PreAuthKey, count)
assertNoErr(t, err)
require.NoError(t, err)
for index := 0; index < count; index++ {
var preAuthKey v1.PreAuthKey
@ -150,7 +151,7 @@ func TestPreAuthKeyCommand(t *testing.T) {
},
&preAuthKey,
)
assertNoErr(t, err)
require.NoError(t, err)
keys[index] = &preAuthKey
}
@ -171,7 +172,7 @@ func TestPreAuthKeyCommand(t *testing.T) {
},
&listedPreAuthKeys,
)
assertNoErr(t, err)
require.NoError(t, err)
// There is one key created by "scenario.CreateHeadscaleEnv"
assert.Len(t, listedPreAuthKeys, 4)
@ -228,7 +229,7 @@ func TestPreAuthKeyCommand(t *testing.T) {
listedPreAuthKeys[1].GetKey(),
},
)
assertNoErr(t, err)
require.NoError(t, err)
var listedPreAuthKeysAfterExpire []v1.PreAuthKey
err = executeAndUnmarshal(
@ -244,7 +245,7 @@ func TestPreAuthKeyCommand(t *testing.T) {
},
&listedPreAuthKeysAfterExpire,
)
assertNoErr(t, err)
require.NoError(t, err)
assert.True(t, listedPreAuthKeysAfterExpire[1].GetExpiration().AsTime().Before(time.Now()))
assert.True(t, listedPreAuthKeysAfterExpire[2].GetExpiration().AsTime().After(time.Now()))
@ -258,7 +259,7 @@ func TestPreAuthKeyCommandWithoutExpiry(t *testing.T) {
user := "pre-auth-key-without-exp-user"
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -266,10 +267,10 @@ func TestPreAuthKeyCommandWithoutExpiry(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clipaknaexp"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
var preAuthKey v1.PreAuthKey
err = executeAndUnmarshal(
@ -286,7 +287,7 @@ func TestPreAuthKeyCommandWithoutExpiry(t *testing.T) {
},
&preAuthKey,
)
assertNoErr(t, err)
require.NoError(t, err)
var listedPreAuthKeys []v1.PreAuthKey
err = executeAndUnmarshal(
@ -302,7 +303,7 @@ func TestPreAuthKeyCommandWithoutExpiry(t *testing.T) {
},
&listedPreAuthKeys,
)
assertNoErr(t, err)
require.NoError(t, err)
// There is one key created by "scenario.CreateHeadscaleEnv"
assert.Len(t, listedPreAuthKeys, 2)
@ -321,7 +322,7 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) {
user := "pre-auth-key-reus-ephm-user"
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -329,10 +330,10 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clipakresueeph"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
var preAuthReusableKey v1.PreAuthKey
err = executeAndUnmarshal(
@ -349,7 +350,7 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) {
},
&preAuthReusableKey,
)
assertNoErr(t, err)
require.NoError(t, err)
var preAuthEphemeralKey v1.PreAuthKey
err = executeAndUnmarshal(
@ -366,7 +367,7 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) {
},
&preAuthEphemeralKey,
)
assertNoErr(t, err)
require.NoError(t, err)
assert.True(t, preAuthEphemeralKey.GetEphemeral())
assert.False(t, preAuthEphemeralKey.GetReusable())
@ -385,7 +386,7 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) {
},
&listedPreAuthKeys,
)
assertNoErr(t, err)
require.NoError(t, err)
// There is one key created by "scenario.CreateHeadscaleEnv"
assert.Len(t, listedPreAuthKeys, 3)
@ -399,7 +400,7 @@ func TestPreAuthKeyCorrectUserLoggedInCommand(t *testing.T) {
user2 := "user2"
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -415,10 +416,10 @@ func TestPreAuthKeyCorrectUserLoggedInCommand(t *testing.T) {
hsic.WithTLS(),
hsic.WithHostnameAsServerURL(),
)
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
var user2Key v1.PreAuthKey
@ -440,10 +441,10 @@ func TestPreAuthKeyCorrectUserLoggedInCommand(t *testing.T) {
},
&user2Key,
)
assertNoErr(t, err)
require.NoError(t, err)
allClients, err := scenario.ListTailscaleClients()
assertNoErrListClients(t, err)
require.NoError(t, err)
assert.Len(t, allClients, 1)
@ -451,22 +452,22 @@ func TestPreAuthKeyCorrectUserLoggedInCommand(t *testing.T) {
// Log out from user1
err = client.Logout()
assertNoErr(t, err)
require.NoError(t, err)
err = scenario.WaitForTailscaleLogout()
assertNoErr(t, err)
require.NoError(t, err)
status, err := client.Status()
assertNoErr(t, err)
require.NoError(t, err)
if status.BackendState == "Starting" || status.BackendState == "Running" {
t.Fatalf("expected node to be logged out, backend state: %s", status.BackendState)
}
err = client.Login(headscale.GetEndpoint(), user2Key.GetKey())
assertNoErr(t, err)
require.NoError(t, err)
status, err = client.Status()
assertNoErr(t, err)
require.NoError(t, err)
if status.BackendState != "Running" {
t.Fatalf("expected node to be logged in, backend state: %s", status.BackendState)
}
@ -487,7 +488,7 @@ func TestPreAuthKeyCorrectUserLoggedInCommand(t *testing.T) {
},
&listNodes,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listNodes, 1)
assert.Equal(t, "user2", listNodes[0].GetUser().GetName())
@ -500,7 +501,7 @@ func TestApiKeyCommand(t *testing.T) {
count := 5
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -509,10 +510,10 @@ func TestApiKeyCommand(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clins"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
keys := make([]string, count)
@ -528,7 +529,7 @@ func TestApiKeyCommand(t *testing.T) {
"json",
},
)
assert.Nil(t, err)
require.NoError(t, err)
assert.NotEmpty(t, apiResult)
keys[idx] = apiResult
@ -547,7 +548,7 @@ func TestApiKeyCommand(t *testing.T) {
},
&listedAPIKeys,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listedAPIKeys, 5)
@ -603,7 +604,7 @@ func TestApiKeyCommand(t *testing.T) {
listedAPIKeys[idx].GetPrefix(),
},
)
assert.Nil(t, err)
require.NoError(t, err)
expiredPrefixes[listedAPIKeys[idx].GetPrefix()] = true
}
@ -619,7 +620,7 @@ func TestApiKeyCommand(t *testing.T) {
},
&listedAfterExpireAPIKeys,
)
assert.Nil(t, err)
require.NoError(t, err)
for index := range listedAfterExpireAPIKeys {
if _, ok := expiredPrefixes[listedAfterExpireAPIKeys[index].GetPrefix()]; ok {
@ -645,7 +646,7 @@ func TestApiKeyCommand(t *testing.T) {
"--prefix",
listedAPIKeys[0].GetPrefix(),
})
assert.Nil(t, err)
require.NoError(t, err)
var listedAPIKeysAfterDelete []v1.ApiKey
err = executeAndUnmarshal(headscale,
@ -658,7 +659,7 @@ func TestApiKeyCommand(t *testing.T) {
},
&listedAPIKeysAfterDelete,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listedAPIKeysAfterDelete, 4)
}
@ -668,7 +669,7 @@ func TestNodeTagCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -676,17 +677,17 @@ func TestNodeTagCommand(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clins"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
machineKeys := []string{
"mkey:9b2ffa7e08cc421a3d2cca9012280f6a236fd0de0b4ce005b30a98ad930306fe",
"mkey:6abd00bb5fdda622db51387088c68e97e71ce58e7056aa54f592b6a8219d524c",
}
nodes := make([]*v1.Node, len(machineKeys))
assert.Nil(t, err)
require.NoError(t, err)
for index, machineKey := range machineKeys {
_, err := headscale.Execute(
@ -704,7 +705,7 @@ func TestNodeTagCommand(t *testing.T) {
"json",
},
)
assert.Nil(t, err)
require.NoError(t, err)
var node v1.Node
err = executeAndUnmarshal(
@ -722,7 +723,7 @@ func TestNodeTagCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
nodes[index] = &node
}
@ -741,7 +742,7 @@ func TestNodeTagCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Equal(t, []string{"tag:test"}, node.GetForcedTags())
@ -755,7 +756,7 @@ func TestNodeTagCommand(t *testing.T) {
"--output", "json",
},
)
assert.ErrorContains(t, err, "tag must start with the string 'tag:'")
require.ErrorContains(t, err, "tag must start with the string 'tag:'")
// Test: list all nodes after they have been added
resultMachines := make([]*v1.Node, len(machineKeys))
@ -769,7 +770,7 @@ func TestNodeTagCommand(t *testing.T) {
},
&resultMachines,
)
assert.Nil(t, err)
require.NoError(t, err)
found := false
for _, node := range resultMachines {
if node.GetForcedTags() != nil {
@ -780,9 +781,8 @@ func TestNodeTagCommand(t *testing.T) {
}
}
}
assert.Equal(
assert.True(
t,
true,
found,
"should find a node with the tag 'tag:test' in the list of nodes",
)
@ -793,18 +793,22 @@ func TestNodeAdvertiseTagNoACLCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
"user1": 1,
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{tsic.WithTags([]string{"tag:test"})}, hsic.WithTestName("cliadvtags"))
assertNoErr(t, err)
err = scenario.CreateHeadscaleEnv(
spec,
[]tsic.Option{tsic.WithTags([]string{"tag:test"})},
hsic.WithTestName("cliadvtags"),
)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
// Test: list all nodes after they have been added
resultMachines := make([]*v1.Node, spec["user1"])
@ -819,7 +823,7 @@ func TestNodeAdvertiseTagNoACLCommand(t *testing.T) {
},
&resultMachines,
)
assert.Nil(t, err)
require.NoError(t, err)
found := false
for _, node := range resultMachines {
if node.GetInvalidTags() != nil {
@ -830,9 +834,8 @@ func TestNodeAdvertiseTagNoACLCommand(t *testing.T) {
}
}
}
assert.Equal(
assert.True(
t,
true,
found,
"should not find a node with the tag 'tag:test' in the list of nodes",
)
@ -843,14 +846,18 @@ func TestNodeAdvertiseTagWithACLCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
"user1": 1,
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{tsic.WithTags([]string{"tag:exists"})}, hsic.WithTestName("cliadvtags"), hsic.WithACLPolicy(
err = scenario.CreateHeadscaleEnv(
spec,
[]tsic.Option{tsic.WithTags([]string{"tag:exists"})},
hsic.WithTestName("cliadvtags"),
hsic.WithACLPolicy(
&policy.ACLPolicy{
ACLs: []policy.ACL{
{
@ -863,11 +870,12 @@ func TestNodeAdvertiseTagWithACLCommand(t *testing.T) {
"tag:exists": {"user1"},
},
},
))
assertNoErr(t, err)
),
)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
// Test: list all nodes after they have been added
resultMachines := make([]*v1.Node, spec["user1"])
@ -882,7 +890,7 @@ func TestNodeAdvertiseTagWithACLCommand(t *testing.T) {
},
&resultMachines,
)
assert.Nil(t, err)
require.NoError(t, err)
found := false
for _, node := range resultMachines {
if node.GetValidTags() != nil {
@ -893,9 +901,8 @@ func TestNodeAdvertiseTagWithACLCommand(t *testing.T) {
}
}
}
assert.Equal(
assert.True(
t,
true,
found,
"should not find a node with the tag 'tag:exists' in the list of nodes",
)
@ -906,7 +913,7 @@ func TestNodeCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -915,10 +922,10 @@ func TestNodeCommand(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clins"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
// Pregenerated machine keys
machineKeys := []string{
@ -929,7 +936,7 @@ func TestNodeCommand(t *testing.T) {
"mkey:cf7b0fd05da556fdc3bab365787b506fd82d64a70745db70e00e86c1b1c03084",
}
nodes := make([]*v1.Node, len(machineKeys))
assert.Nil(t, err)
require.NoError(t, err)
for index, machineKey := range machineKeys {
_, err := headscale.Execute(
@ -947,7 +954,7 @@ func TestNodeCommand(t *testing.T) {
"json",
},
)
assert.Nil(t, err)
require.NoError(t, err)
var node v1.Node
err = executeAndUnmarshal(
@ -965,7 +972,7 @@ func TestNodeCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
nodes[index] = &node
}
@ -985,7 +992,7 @@ func TestNodeCommand(t *testing.T) {
},
&listAll,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listAll, 5)
@ -1006,7 +1013,7 @@ func TestNodeCommand(t *testing.T) {
"mkey:dc721977ac7415aafa87f7d4574cbe07c6b171834a6d37375782bdc1fb6b3584",
}
otherUserMachines := make([]*v1.Node, len(otherUserMachineKeys))
assert.Nil(t, err)
require.NoError(t, err)
for index, machineKey := range otherUserMachineKeys {
_, err := headscale.Execute(
@ -1024,7 +1031,7 @@ func TestNodeCommand(t *testing.T) {
"json",
},
)
assert.Nil(t, err)
require.NoError(t, err)
var node v1.Node
err = executeAndUnmarshal(
@ -1042,7 +1049,7 @@ func TestNodeCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
otherUserMachines[index] = &node
}
@ -1062,7 +1069,7 @@ func TestNodeCommand(t *testing.T) {
},
&listAllWithotherUser,
)
assert.Nil(t, err)
require.NoError(t, err)
// All nodes, nodes + otherUser
assert.Len(t, listAllWithotherUser, 7)
@ -1088,7 +1095,7 @@ func TestNodeCommand(t *testing.T) {
},
&listOnlyotherUserMachineUser,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listOnlyotherUserMachineUser, 2)
@ -1120,7 +1127,7 @@ func TestNodeCommand(t *testing.T) {
"--force",
},
)
assert.Nil(t, err)
require.NoError(t, err)
// Test: list main user after node is deleted
var listOnlyMachineUserAfterDelete []v1.Node
@ -1137,7 +1144,7 @@ func TestNodeCommand(t *testing.T) {
},
&listOnlyMachineUserAfterDelete,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listOnlyMachineUserAfterDelete, 4)
}
@ -1147,7 +1154,7 @@ func TestNodeExpireCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -1155,10 +1162,10 @@ func TestNodeExpireCommand(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clins"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
// Pregenerated machine keys
machineKeys := []string{
@ -1186,7 +1193,7 @@ func TestNodeExpireCommand(t *testing.T) {
"json",
},
)
assert.Nil(t, err)
require.NoError(t, err)
var node v1.Node
err = executeAndUnmarshal(
@ -1204,7 +1211,7 @@ func TestNodeExpireCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
nodes[index] = &node
}
@ -1223,7 +1230,7 @@ func TestNodeExpireCommand(t *testing.T) {
},
&listAll,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listAll, 5)
@ -1243,7 +1250,7 @@ func TestNodeExpireCommand(t *testing.T) {
fmt.Sprintf("%d", listAll[idx].GetId()),
},
)
assert.Nil(t, err)
require.NoError(t, err)
}
var listAllAfterExpiry []v1.Node
@ -1258,7 +1265,7 @@ func TestNodeExpireCommand(t *testing.T) {
},
&listAllAfterExpiry,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listAllAfterExpiry, 5)
@ -1274,7 +1281,7 @@ func TestNodeRenameCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -1282,10 +1289,10 @@ func TestNodeRenameCommand(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clins"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
// Pregenerated machine keys
machineKeys := []string{
@ -1296,7 +1303,7 @@ func TestNodeRenameCommand(t *testing.T) {
"mkey:9b2ffa7e08cc421a3d2cca9012280f6a236fd0de0b4ce005b30a98ad930306fe",
}
nodes := make([]*v1.Node, len(machineKeys))
assert.Nil(t, err)
require.NoError(t, err)
for index, machineKey := range machineKeys {
_, err := headscale.Execute(
@ -1314,7 +1321,7 @@ func TestNodeRenameCommand(t *testing.T) {
"json",
},
)
assertNoErr(t, err)
require.NoError(t, err)
var node v1.Node
err = executeAndUnmarshal(
@ -1332,7 +1339,7 @@ func TestNodeRenameCommand(t *testing.T) {
},
&node,
)
assertNoErr(t, err)
require.NoError(t, err)
nodes[index] = &node
}
@ -1351,7 +1358,7 @@ func TestNodeRenameCommand(t *testing.T) {
},
&listAll,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listAll, 5)
@ -1372,7 +1379,7 @@ func TestNodeRenameCommand(t *testing.T) {
fmt.Sprintf("newnode-%d", idx+1),
},
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Contains(t, res, "Node renamed")
}
@ -1389,7 +1396,7 @@ func TestNodeRenameCommand(t *testing.T) {
},
&listAllAfterRename,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listAllAfterRename, 5)
@ -1410,7 +1417,7 @@ func TestNodeRenameCommand(t *testing.T) {
strings.Repeat("t", 64),
},
)
assert.ErrorContains(t, err, "not be over 63 chars")
require.ErrorContains(t, err, "not be over 63 chars")
var listAllAfterRenameAttempt []v1.Node
err = executeAndUnmarshal(
@ -1424,7 +1431,7 @@ func TestNodeRenameCommand(t *testing.T) {
},
&listAllAfterRenameAttempt,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, listAllAfterRenameAttempt, 5)
@ -1440,7 +1447,7 @@ func TestNodeMoveCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -1449,10 +1456,10 @@ func TestNodeMoveCommand(t *testing.T) {
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{}, hsic.WithTestName("clins"))
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
// Randomly generated node key
machineKey := "mkey:688411b767663479632d44140f08a9fde87383adc7cdeb518f62ce28a17ef0aa"
@ -1472,7 +1479,7 @@ func TestNodeMoveCommand(t *testing.T) {
"json",
},
)
assert.Nil(t, err)
require.NoError(t, err)
var node v1.Node
err = executeAndUnmarshal(
@ -1490,11 +1497,11 @@ func TestNodeMoveCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Equal(t, uint64(1), node.GetId())
assert.Equal(t, "nomad-node", node.GetName())
assert.Equal(t, node.GetUser().GetName(), "old-user")
assert.Equal(t, "old-user", node.GetUser().GetName())
nodeID := fmt.Sprintf("%d", node.GetId())
@ -1513,9 +1520,9 @@ func TestNodeMoveCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Equal(t, node.GetUser().GetName(), "new-user")
assert.Equal(t, "new-user", node.GetUser().GetName())
var allNodes []v1.Node
err = executeAndUnmarshal(
@ -1529,13 +1536,13 @@ func TestNodeMoveCommand(t *testing.T) {
},
&allNodes,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, allNodes, 1)
assert.Equal(t, allNodes[0].GetId(), node.GetId())
assert.Equal(t, allNodes[0].GetUser(), node.GetUser())
assert.Equal(t, allNodes[0].GetUser().GetName(), "new-user")
assert.Equal(t, "new-user", allNodes[0].GetUser().GetName())
_, err = headscale.Execute(
[]string{
@ -1550,12 +1557,12 @@ func TestNodeMoveCommand(t *testing.T) {
"json",
},
)
assert.ErrorContains(
require.ErrorContains(
t,
err,
"user not found",
)
assert.Equal(t, node.GetUser().GetName(), "new-user")
assert.Equal(t, "new-user", node.GetUser().GetName())
err = executeAndUnmarshal(
headscale,
@ -1572,9 +1579,9 @@ func TestNodeMoveCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Equal(t, node.GetUser().GetName(), "old-user")
assert.Equal(t, "old-user", node.GetUser().GetName())
err = executeAndUnmarshal(
headscale,
@ -1591,9 +1598,9 @@ func TestNodeMoveCommand(t *testing.T) {
},
&node,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Equal(t, node.GetUser().GetName(), "old-user")
assert.Equal(t, "old-user", node.GetUser().GetName())
}
func TestPolicyCommand(t *testing.T) {
@ -1601,7 +1608,7 @@ func TestPolicyCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -1616,10 +1623,10 @@ func TestPolicyCommand(t *testing.T) {
"HEADSCALE_POLICY_MODE": "database",
}),
)
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
p := policy.ACLPolicy{
ACLs: []policy.ACL{
@ -1639,7 +1646,7 @@ func TestPolicyCommand(t *testing.T) {
policyFilePath := "/etc/headscale/policy.json"
err = headscale.WriteFile(policyFilePath, pBytes)
assertNoErr(t, err)
require.NoError(t, err)
// No policy is present at this time.
// Add a new policy from a file.
@ -1653,7 +1660,7 @@ func TestPolicyCommand(t *testing.T) {
},
)
assertNoErr(t, err)
require.NoError(t, err)
// Get the current policy and check
// if it is the same as the one we set.
@ -1669,11 +1676,11 @@ func TestPolicyCommand(t *testing.T) {
},
&output,
)
assertNoErr(t, err)
require.NoError(t, err)
assert.Len(t, output.TagOwners, 1)
assert.Len(t, output.ACLs, 1)
assert.Equal(t, output.TagOwners["tag:exists"], []string{"policy-user"})
assert.Equal(t, []string{"policy-user"}, output.TagOwners["tag:exists"])
}
func TestPolicyBrokenConfigCommand(t *testing.T) {
@ -1681,7 +1688,7 @@ func TestPolicyBrokenConfigCommand(t *testing.T) {
t.Parallel()
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
require.NoError(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
@ -1696,10 +1703,10 @@ func TestPolicyBrokenConfigCommand(t *testing.T) {
"HEADSCALE_POLICY_MODE": "database",
}),
)
assertNoErr(t, err)
require.NoError(t, err)
headscale, err := scenario.Headscale()
assertNoErr(t, err)
require.NoError(t, err)
p := policy.ACLPolicy{
ACLs: []policy.ACL{
@ -1721,7 +1728,7 @@ func TestPolicyBrokenConfigCommand(t *testing.T) {
policyFilePath := "/etc/headscale/policy.json"
err = headscale.WriteFile(policyFilePath, pBytes)
assertNoErr(t, err)
require.NoError(t, err)
// No policy is present at this time.
// Add a new policy from a file.
@ -1734,7 +1741,7 @@ func TestPolicyBrokenConfigCommand(t *testing.T) {
policyFilePath,
},
)
assert.ErrorContains(t, err, "verifying policy rules: invalid action")
require.ErrorContains(t, err, "verifying policy rules: invalid action")
// The new policy was invalid, the old one should still be in place, which
// is none.
@ -1747,5 +1754,5 @@ func TestPolicyBrokenConfigCommand(t *testing.T) {
"json",
},
)
assert.ErrorContains(t, err, "acl policy not found")
require.ErrorContains(t, err, "acl policy not found")
}

@ -0,0 +1,96 @@
package integration
import (
"encoding/json"
"fmt"
"net"
"strconv"
"strings"
"testing"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/integration/dsic"
"github.com/juanfont/headscale/integration/hsic"
"github.com/juanfont/headscale/integration/integrationutil"
"github.com/juanfont/headscale/integration/tsic"
"tailscale.com/tailcfg"
)
func TestDERPVerifyEndpoint(t *testing.T) {
IntegrationSkip(t)
// Generate random hostname for the headscale instance
hash, err := util.GenerateRandomStringDNSSafe(6)
assertNoErr(t, err)
testName := "derpverify"
hostname := fmt.Sprintf("hs-%s-%s", testName, hash)
headscalePort := 8080
// Create cert for headscale
certHeadscale, keyHeadscale, err := integrationutil.CreateCertificate(hostname)
assertNoErr(t, err)
scenario, err := NewScenario(dockertestMaxWait())
assertNoErr(t, err)
defer scenario.ShutdownAssertNoPanics(t)
spec := map[string]int{
"user1": len(MustTestVersions),
}
derper, err := scenario.CreateDERPServer("head",
dsic.WithCACert(certHeadscale),
dsic.WithVerifyClientURL(fmt.Sprintf("https://%s/verify", net.JoinHostPort(hostname, strconv.Itoa(headscalePort)))),
)
assertNoErr(t, err)
derpMap := tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
900: {
RegionID: 900,
RegionCode: "test-derpverify",
RegionName: "TestDerpVerify",
Nodes: []*tailcfg.DERPNode{
{
Name: "TestDerpVerify",
RegionID: 900,
HostName: derper.GetHostname(),
STUNPort: derper.GetSTUNPort(),
STUNOnly: false,
DERPPort: derper.GetDERPPort(),
},
},
},
},
}
err = scenario.CreateHeadscaleEnv(spec, []tsic.Option{tsic.WithCACert(derper.GetCert())},
hsic.WithHostname(hostname),
hsic.WithPort(headscalePort),
hsic.WithCustomTLS(certHeadscale, keyHeadscale),
hsic.WithHostnameAsServerURL(),
hsic.WithDERPConfig(derpMap))
assertNoErrHeadscaleEnv(t, err)
allClients, err := scenario.ListTailscaleClients()
assertNoErrListClients(t, err)
for _, client := range allClients {
report, err := client.DebugDERPRegion("test-derpverify")
assertNoErr(t, err)
successful := false
for _, line := range report.Info {
if strings.Contains(line, "Successfully established a DERP connection with node") {
successful = true
break
}
}
if !successful {
stJSON, err := json.Marshal(report)
assertNoErr(t, err)
t.Errorf("Client %s could not establish a DERP connection: %s", client.Hostname(), string(stJSON))
}
}
}
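For context on the --verify-client-url contract this test exercises: the DERPer POSTs a JSON-encoded tailcfg.DERPAdmitClientRequest to the configured URL and admits the client only when the response's Allow field is true. The sketch below is a hypothetical minimal endpoint under that assumption; verifyHandler and the isKnown allow-list callback are illustrative and not headscale's actual /verify implementation.

package main

import (
	"encoding/json"
	"log"
	"net/http"

	"tailscale.com/tailcfg"
	"tailscale.com/types/key"
)

// verifyHandler sketches a DERP client-verification endpoint: it decodes the
// admission request and allows the connection only for known node keys.
func verifyHandler(isKnown func(key.NodePublic) bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req tailcfg.DERPAdmitClientRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		resp := tailcfg.DERPAdmitClientResponse{Allow: isKnown(req.NodePublic)}
		w.Header().Set("Content-Type", "application/json")
		if err := json.NewEncoder(w).Encode(&resp); err != nil {
			log.Printf("encoding verify response: %s", err)
		}
	}
}

func main() {
	// Allow everyone; a real control server would check its node database.
	http.Handle("/verify", verifyHandler(func(key.NodePublic) bool { return true }))
	log.Fatal(http.ListenAndServe(":8080", nil))
}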

integration/dsic/dsic.go

@ -0,0 +1,321 @@
package dsic
import (
"crypto/tls"
"errors"
"fmt"
"log"
"net"
"net/http"
"strconv"
"strings"
"time"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/integration/dockertestutil"
"github.com/juanfont/headscale/integration/integrationutil"
"github.com/ory/dockertest/v3"
"github.com/ory/dockertest/v3/docker"
)
const (
dsicHashLength = 6
dockerContextPath = "../."
caCertRoot = "/usr/local/share/ca-certificates"
DERPerCertRoot = "/usr/local/share/derper-certs"
dockerExecuteTimeout = 60 * time.Second
)
var errDERPerStatusCodeNotOk = errors.New("DERPer status code not OK")
// DERPServerInContainer represents a DERP server running in a container (DSIC).
type DERPServerInContainer struct {
version string
hostname string
pool *dockertest.Pool
container *dockertest.Resource
network *dockertest.Network
stunPort int
derpPort int
caCerts [][]byte
tlsCert []byte
tlsKey []byte
withExtraHosts []string
withVerifyClientURL string
workdir string
}
// Option represent optional settings that can be given to a
// DERPer instance.
type Option = func(c *DERPServerInContainer)
// WithCACert adds the given certificate to the trusted certificates of the DERPer container.
func WithCACert(cert []byte) Option {
return func(dsic *DERPServerInContainer) {
dsic.caCerts = append(dsic.caCerts, cert)
}
}
// WithOrCreateNetwork sets the Docker container network to use with
// the DERPer instance. If the parameter is nil, a new network that
// isolates the DERPer will be created. If a network is
// passed, the DERPer instance will join the given network.
func WithOrCreateNetwork(network *dockertest.Network) Option {
return func(dsic *DERPServerInContainer) {
if network != nil {
dsic.network = network
return
}
network, err := dockertestutil.GetFirstOrCreateNetwork(
dsic.pool,
dsic.hostname+"-network",
)
if err != nil {
log.Fatalf("failed to create network: %s", err)
}
dsic.network = network
}
}
// WithDockerWorkdir allows the docker working directory to be set.
func WithDockerWorkdir(dir string) Option {
return func(dsic *DERPServerInContainer) {
dsic.workdir = dir
}
}
// WithVerifyClientURL sets the URL the DERPer uses to verify connecting clients.
func WithVerifyClientURL(url string) Option {
return func(dsic *DERPServerInContainer) {
dsic.withVerifyClientURL = url
}
}
// WithExtraHosts adds extra hosts to the container.
func WithExtraHosts(hosts []string) Option {
return func(dsic *DERPServerInContainer) {
dsic.withExtraHosts = hosts
}
}
// New returns a new DERPServerInContainer instance.
func New(
pool *dockertest.Pool,
version string,
network *dockertest.Network,
opts ...Option,
) (*DERPServerInContainer, error) {
hash, err := util.GenerateRandomStringDNSSafe(dsicHashLength)
if err != nil {
return nil, err
}
hostname := fmt.Sprintf("derp-%s-%s", strings.ReplaceAll(version, ".", "-"), hash)
tlsCert, tlsKey, err := integrationutil.CreateCertificate(hostname)
if err != nil {
return nil, fmt.Errorf("failed to create certificates for headscale test: %w", err)
}
dsic := &DERPServerInContainer{
version: version,
hostname: hostname,
pool: pool,
network: network,
tlsCert: tlsCert,
tlsKey: tlsKey,
stunPort: 3478, //nolint
derpPort: 443, //nolint
}
for _, opt := range opts {
opt(dsic)
}
var cmdArgs strings.Builder
fmt.Fprintf(&cmdArgs, "--hostname=%s", hostname)
fmt.Fprintf(&cmdArgs, " --certmode=manual")
fmt.Fprintf(&cmdArgs, " --certdir=%s", DERPerCertRoot)
fmt.Fprintf(&cmdArgs, " --a=:%d", dsic.derpPort)
fmt.Fprintf(&cmdArgs, " --stun=true")
fmt.Fprintf(&cmdArgs, " --stun-port=%d", dsic.stunPort)
if dsic.withVerifyClientURL != "" {
fmt.Fprintf(&cmdArgs, " --verify-client-url=%s", dsic.withVerifyClientURL)
}
runOptions := &dockertest.RunOptions{
Name: hostname,
Networks: []*dockertest.Network{dsic.network},
ExtraHosts: dsic.withExtraHosts,
// we currently need to allow some time to inject the certificates further down.
Entrypoint: []string{"/bin/sh", "-c", "/bin/sleep 3 ; update-ca-certificates ; derper " + cmdArgs.String()},
ExposedPorts: []string{
"80/tcp",
fmt.Sprintf("%d/tcp", dsic.derpPort),
fmt.Sprintf("%d/udp", dsic.stunPort),
},
}
if dsic.workdir != "" {
runOptions.WorkingDir = dsic.workdir
}
// dockertest isn't very good at handling containers that have already
// been created; this is an attempt to make sure this container isn't
// present.
err = pool.RemoveContainerByName(hostname)
if err != nil {
return nil, err
}
var container *dockertest.Resource
buildOptions := &dockertest.BuildOptions{
Dockerfile: "Dockerfile.derper",
ContextDir: dockerContextPath,
BuildArgs: []docker.BuildArg{},
}
switch version {
case "head":
buildOptions.BuildArgs = append(buildOptions.BuildArgs, docker.BuildArg{
Name: "VERSION_BRANCH",
Value: "main",
})
default:
buildOptions.BuildArgs = append(buildOptions.BuildArgs, docker.BuildArg{
Name: "VERSION_BRANCH",
Value: "v" + version,
})
}
container, err = pool.BuildAndRunWithBuildOptions(
buildOptions,
runOptions,
dockertestutil.DockerRestartPolicy,
dockertestutil.DockerAllowLocalIPv6,
dockertestutil.DockerAllowNetworkAdministration,
)
if err != nil {
return nil, fmt.Errorf(
"%s could not start tailscale DERPer container (version: %s): %w",
hostname,
version,
err,
)
}
log.Printf("Created %s container\n", hostname)
dsic.container = container
for i, cert := range dsic.caCerts {
err = dsic.WriteFile(fmt.Sprintf("%s/user-%d.crt", caCertRoot, i), cert)
if err != nil {
return nil, fmt.Errorf("failed to write TLS certificate to container: %w", err)
}
}
if len(dsic.tlsCert) != 0 {
err = dsic.WriteFile(fmt.Sprintf("%s/%s.crt", DERPerCertRoot, dsic.hostname), dsic.tlsCert)
if err != nil {
return nil, fmt.Errorf("failed to write TLS certificate to container: %w", err)
}
}
if len(dsic.tlsKey) != 0 {
err = dsic.WriteFile(fmt.Sprintf("%s/%s.key", DERPerCertRoot, dsic.hostname), dsic.tlsKey)
if err != nil {
return nil, fmt.Errorf("failed to write TLS key to container: %w", err)
}
}
return dsic, nil
}
// Shutdown stops and cleans up the DERPer container.
func (t *DERPServerInContainer) Shutdown() error {
err := t.SaveLog("/tmp/control")
if err != nil {
log.Printf(
"Failed to save log from %s: %s",
t.hostname,
fmt.Errorf("failed to save log: %w", err),
)
}
return t.pool.Purge(t.container)
}
// GetCert returns the TLS certificate of the DERPer instance.
func (t *DERPServerInContainer) GetCert() []byte {
return t.tlsCert
}
// Hostname returns the hostname of the DERPer instance.
func (t *DERPServerInContainer) Hostname() string {
return t.hostname
}
// Version returns the running DERPer version of the instance.
func (t *DERPServerInContainer) Version() string {
return t.version
}
// ID returns the Docker container ID of the DERPServerInContainer
// instance.
func (t *DERPServerInContainer) ID() string {
return t.container.Container.ID
}
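// GetHostname returns the hostname of the DERPer container.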
func (t *DERPServerInContainer) GetHostname() string {
return t.hostname
}
// GetSTUNPort returns the STUN port of the DERPer instance.
func (t *DERPServerInContainer) GetSTUNPort() int {
return t.stunPort
}
// GetDERPPort returns the DERP port of the DERPer instance.
func (t *DERPServerInContainer) GetDERPPort() int {
return t.derpPort
}
// WaitForRunning blocks until the DERPer instance is ready to be used.
func (t *DERPServerInContainer) WaitForRunning() error {
url := "https://" + net.JoinHostPort(t.GetHostname(), strconv.Itoa(t.GetDERPPort())) + "/"
log.Printf("waiting for DERPer to be ready at %s", url)
insecureTransport := http.DefaultTransport.(*http.Transport).Clone() //nolint
insecureTransport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true} //nolint
client := &http.Client{Transport: insecureTransport}
return t.pool.Retry(func() error {
resp, err := client.Get(url) //nolint
if err != nil {
return fmt.Errorf("headscale is not ready: %w", err)
}
if resp.StatusCode != http.StatusOK {
return errDERPerStatusCodeNotOk
}
return nil
})
}
// ConnectToNetwork connects the DERPer instance to a network.
func (t *DERPServerInContainer) ConnectToNetwork(network *dockertest.Network) error {
return t.container.ConnectToNetwork(network)
}
// WriteFile saves a file inside the container.
func (t *DERPServerInContainer) WriteFile(path string, data []byte) error {
return integrationutil.WriteFileToContainer(t.pool, t.container, path, data)
}
// SaveLog saves the current stdout log of the container to a path
// on the host system.
func (t *DERPServerInContainer) SaveLog(path string) error {
_, _, err := dockertestutil.SaveLog(t.pool, t.container, path)
return err
}
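Taken together, a hedged sketch of the DSIC lifecycle introduced above; the pool and network setup, the CA certificate, and the verify URL are placeholders a caller would supply.

package main

import (
	"log"

	"github.com/juanfont/headscale/integration/dsic"
	"github.com/ory/dockertest/v3"
)

func main() {
	pool, err := dockertest.NewPool("")
	if err != nil {
		log.Fatalf("connecting to docker: %s", err)
	}
	network, err := pool.CreateNetwork("derp-example")
	if err != nil {
		log.Fatalf("creating network: %s", err)
	}

	// caCert would hold a PEM certificate the DERPer should trust; the
	// verify URL points at a (hypothetical) headscale /verify endpoint.
	var caCert []byte
	derper, err := dsic.New(pool, "head", network,
		dsic.WithCACert(caCert),
		dsic.WithVerifyClientURL("https://headscale.example.test:8080/verify"),
	)
	if err != nil {
		log.Fatalf("creating DERPer: %s", err)
	}
	defer derper.Shutdown()

	// Block until the container answers HTTPS before pointing clients at it.
	if err := derper.WaitForRunning(); err != nil {
		log.Fatalf("DERPer never became ready: %s", err)
	}
}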

@ -310,7 +310,7 @@ func (s *EmbeddedDERPServerScenario) CreateTailscaleIsolatedNodesInUser(
cert := hsServer.GetCert()
opts = append(opts,
tsic.WithHeadscaleTLS(cert),
tsic.WithCACert(cert),
)
user.createWaitGroup.Go(func() error {

@ -18,6 +18,7 @@ import (
"github.com/rs/zerolog/log"
"github.com/samber/lo"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/sync/errgroup"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/types/key"
@ -244,7 +245,11 @@ func TestEphemeral(t *testing.T) {
}
func TestEphemeralInAlternateTimezone(t *testing.T) {
testEphemeralWithOptions(t, hsic.WithTestName("ephemeral-tz"), hsic.WithTimezone("America/Los_Angeles"))
testEphemeralWithOptions(
t,
hsic.WithTestName("ephemeral-tz"),
hsic.WithTimezone("America/Los_Angeles"),
)
}
func testEphemeralWithOptions(t *testing.T, opts ...hsic.Option) {
@ -1164,10 +1169,10 @@ func Test2118DeletingOnlineNodePanics(t *testing.T) {
},
&nodeList,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, nodeList, 2)
assert.True(t, nodeList[0].Online)
assert.True(t, nodeList[1].Online)
assert.True(t, nodeList[0].GetOnline())
assert.True(t, nodeList[1].GetOnline())
// Delete the first node, which is online
_, err = headscale.Execute(
@ -1177,13 +1182,13 @@ func Test2118DeletingOnlineNodePanics(t *testing.T) {
"delete",
"--identifier",
// Delete the last added machine
fmt.Sprintf("%d", nodeList[0].Id),
fmt.Sprintf("%d", nodeList[0].GetId()),
"--output",
"json",
"--force",
},
)
assert.Nil(t, err)
require.NoError(t, err)
time.Sleep(2 * time.Second)
@ -1200,9 +1205,8 @@ func Test2118DeletingOnlineNodePanics(t *testing.T) {
},
&nodeListAfter,
)
assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, nodeListAfter, 1)
assert.True(t, nodeListAfter[0].Online)
assert.Equal(t, nodeList[1].Id, nodeListAfter[0].Id)
assert.True(t, nodeListAfter[0].GetOnline())
assert.Equal(t, nodeList[1].GetId(), nodeListAfter[0].GetId())
}

@ -1,19 +1,12 @@
package hsic
import (
"bytes"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/json"
"encoding/pem"
"errors"
"fmt"
"io"
"log"
"math/big"
"net"
"net/http"
"net/url"
@ -32,11 +25,14 @@ import (
"github.com/juanfont/headscale/integration/integrationutil"
"github.com/ory/dockertest/v3"
"github.com/ory/dockertest/v3/docker"
"gopkg.in/yaml.v3"
"tailscale.com/tailcfg"
)
const (
hsicHashLength = 6
dockerContextPath = "../."
caCertRoot = "/usr/local/share/ca-certificates"
aclPolicyPath = "/etc/headscale/acl.hujson"
tlsCertPath = "/etc/headscale/tls.cert"
tlsKeyPath = "/etc/headscale/tls.key"
@ -64,6 +60,7 @@ type HeadscaleInContainer struct {
// optional config
port int
extraPorts []string
caCerts [][]byte
hostPortBindings map[string][]string
aclPolicy *policy.ACLPolicy
env map[string]string
@ -88,18 +85,29 @@ func WithACLPolicy(acl *policy.ACLPolicy) Option {
}
}
// WithCACert adds the given certificate to the trusted certificates of the container.
func WithCACert(cert []byte) Option {
return func(hsic *HeadscaleInContainer) {
hsic.caCerts = append(hsic.caCerts, cert)
}
}
// WithTLS creates certificates and enables HTTPS.
func WithTLS() Option {
return func(hsic *HeadscaleInContainer) {
cert, key, err := createCertificate(hsic.hostname)
cert, key, err := integrationutil.CreateCertificate(hsic.hostname)
if err != nil {
log.Fatalf("failed to create certificates for headscale test: %s", err)
}
// TODO(kradalby): Move somewhere appropriate
hsic.env["HEADSCALE_TLS_CERT_PATH"] = tlsCertPath
hsic.env["HEADSCALE_TLS_KEY_PATH"] = tlsKeyPath
hsic.tlsCert = cert
hsic.tlsKey = key
}
}
// WithCustomTLS uses the given certificates for the Headscale instance.
func WithCustomTLS(cert, key []byte) Option {
return func(hsic *HeadscaleInContainer) {
hsic.tlsCert = cert
hsic.tlsKey = key
}
@ -146,6 +154,13 @@ func WithTestName(testName string) Option {
}
}
// WithHostname sets the hostname of the Headscale instance.
func WithHostname(hostname string) Option {
return func(hsic *HeadscaleInContainer) {
hsic.hostname = hostname
}
}
// WithHostnameAsServerURL sets the Headscale ServerURL based on
// the Hostname.
func WithHostnameAsServerURL() Option {
@ -203,6 +218,34 @@ func WithEmbeddedDERPServerOnly() Option {
}
}
// WithDERPConfig configures Headscale to use only a custom
// DERP server.
func WithDERPConfig(derpMap tailcfg.DERPMap) Option {
return func(hsic *HeadscaleInContainer) {
contents, err := yaml.Marshal(derpMap)
if err != nil {
log.Fatalf("failed to marshal DERP map: %s", err)
return
}
hsic.env["HEADSCALE_DERP_PATHS"] = "/etc/headscale/derp.yml"
hsic.filesInContainer = append(hsic.filesInContainer,
fileInContainer{
path: "/etc/headscale/derp.yml",
contents: contents,
})
// Disable global DERP server and embedded DERP server
hsic.env["HEADSCALE_DERP_URLS"] = ""
hsic.env["HEADSCALE_DERP_SERVER_ENABLED"] = "false"
// Envknob for enabling DERP debug logs
hsic.env["DERP_DEBUG_LOGS"] = "true"
hsic.env["DERP_PROBER_DEBUG_LOGS"] = "true"
}
}
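// A minimal usage sketch (illustrative; TestDERPVerifyEndpoint shows the
// full flow): build a single-region tailcfg.DERPMap and pass it along with
// the other options. The region ID and hostname below are placeholders.
//
//	derpMap := tailcfg.DERPMap{
//		Regions: map[int]*tailcfg.DERPRegion{
//			900: {
//				RegionID: 900,
//				Nodes: []*tailcfg.DERPNode{{
//					Name:     "example",
//					RegionID: 900,
//					HostName: "derp.example.test",
//					DERPPort: 443,
//				}},
//			},
//		},
//	}
//	err := scenario.CreateHeadscaleEnv(spec, []tsic.Option{},
//		hsic.WithDERPConfig(derpMap))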
// WithTuning allows changing the tuning settings easily.
func WithTuning(batchTimeout time.Duration, mapSessionChanSize int) Option {
return func(hsic *HeadscaleInContainer) {
@ -300,6 +343,10 @@ func New(
"HEADSCALE_DEBUG_HIGH_CARDINALITY_METRICS=1",
"HEADSCALE_DEBUG_DUMP_CONFIG=1",
}
if hsic.hasTLS() {
hsic.env["HEADSCALE_TLS_CERT_PATH"] = tlsCertPath
hsic.env["HEADSCALE_TLS_KEY_PATH"] = tlsKeyPath
}
for key, value := range hsic.env {
env = append(env, fmt.Sprintf("%s=%s", key, value))
}
@ -313,7 +360,7 @@ func New(
// Cmd: []string{"headscale", "serve"},
// TODO(kradalby): Get rid of this hack; we currently need to allow some time
// to inject the headscale configuration further down.
Entrypoint: []string{"/bin/bash", "-c", "/bin/sleep 3 ; headscale serve ; /bin/sleep 30"},
Entrypoint: []string{"/bin/bash", "-c", "/bin/sleep 3 ; update-ca-certificates ; headscale serve ; /bin/sleep 30"},
Env: env,
}
@ -351,6 +398,14 @@ func New(
hsic.container = container
// Write the CA certificates to the container
for i, cert := range hsic.caCerts {
err = hsic.WriteFile(fmt.Sprintf("%s/user-%d.crt", caCertRoot, i), cert)
if err != nil {
return nil, fmt.Errorf("failed to write TLS certificate to container: %w", err)
}
}
err = hsic.WriteFile("/etc/headscale/config.yaml", []byte(MinimumConfigYAML()))
if err != nil {
return nil, fmt.Errorf("failed to write headscale config to container: %w", err)
@ -749,86 +804,3 @@ func (t *HeadscaleInContainer) SendInterrupt() error {
return nil
}
// nolint
func createCertificate(hostname string) ([]byte, []byte, error) {
// From:
// https://shaneutt.com/blog/golang-ca-and-signed-cert-go/
ca := &x509.Certificate{
SerialNumber: big.NewInt(2019),
Subject: pkix.Name{
Organization: []string{"Headscale testing INC"},
Country: []string{"NL"},
Locality: []string{"Leiden"},
},
NotBefore: time.Now(),
NotAfter: time.Now().Add(60 * time.Hour),
IsCA: true,
ExtKeyUsage: []x509.ExtKeyUsage{
x509.ExtKeyUsageClientAuth,
x509.ExtKeyUsageServerAuth,
},
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
}
caPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
cert := &x509.Certificate{
SerialNumber: big.NewInt(1658),
Subject: pkix.Name{
CommonName: hostname,
Organization: []string{"Headscale testing INC"},
Country: []string{"NL"},
Locality: []string{"Leiden"},
},
NotBefore: time.Now(),
NotAfter: time.Now().Add(60 * time.Minute),
SubjectKeyId: []byte{1, 2, 3, 4, 6},
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
KeyUsage: x509.KeyUsageDigitalSignature,
DNSNames: []string{hostname},
}
certPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
certBytes, err := x509.CreateCertificate(
rand.Reader,
cert,
ca,
&certPrivKey.PublicKey,
caPrivKey,
)
if err != nil {
return nil, nil, err
}
certPEM := new(bytes.Buffer)
err = pem.Encode(certPEM, &pem.Block{
Type: "CERTIFICATE",
Bytes: certBytes,
})
if err != nil {
return nil, nil, err
}
certPrivKeyPEM := new(bytes.Buffer)
err = pem.Encode(certPrivKeyPEM, &pem.Block{
Type: "RSA PRIVATE KEY",
Bytes: x509.MarshalPKCS1PrivateKey(certPrivKey),
})
if err != nil {
return nil, nil, err
}
return certPEM.Bytes(), certPrivKeyPEM.Bytes(), nil
}

@ -3,9 +3,16 @@ package integrationutil
import (
"archive/tar"
"bytes"
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"fmt"
"io"
"math/big"
"path/filepath"
"time"
"github.com/juanfont/headscale/integration/dockertestutil"
"github.com/ory/dockertest/v3"
@ -93,3 +100,86 @@ func FetchPathFromContainer(
return buf.Bytes(), nil
}
// nolint
func CreateCertificate(hostname string) ([]byte, []byte, error) {
// From:
// https://shaneutt.com/blog/golang-ca-and-signed-cert-go/
ca := &x509.Certificate{
SerialNumber: big.NewInt(2019),
Subject: pkix.Name{
Organization: []string{"Headscale testing INC"},
Country: []string{"NL"},
Locality: []string{"Leiden"},
},
NotBefore: time.Now(),
NotAfter: time.Now().Add(60 * time.Hour),
IsCA: true,
ExtKeyUsage: []x509.ExtKeyUsage{
x509.ExtKeyUsageClientAuth,
x509.ExtKeyUsageServerAuth,
},
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
}
caPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
cert := &x509.Certificate{
SerialNumber: big.NewInt(1658),
Subject: pkix.Name{
CommonName: hostname,
Organization: []string{"Headscale testing INC"},
Country: []string{"NL"},
Locality: []string{"Leiden"},
},
NotBefore: time.Now(),
NotAfter: time.Now().Add(60 * time.Minute),
SubjectKeyId: []byte{1, 2, 3, 4, 6},
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
KeyUsage: x509.KeyUsageDigitalSignature,
DNSNames: []string{hostname},
}
certPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
certBytes, err := x509.CreateCertificate(
rand.Reader,
cert,
ca,
&certPrivKey.PublicKey,
caPrivKey,
)
if err != nil {
return nil, nil, err
}
certPEM := new(bytes.Buffer)
err = pem.Encode(certPEM, &pem.Block{
Type: "CERTIFICATE",
Bytes: certBytes,
})
if err != nil {
return nil, nil, err
}
certPrivKeyPEM := new(bytes.Buffer)
err = pem.Encode(certPrivKeyPEM, &pem.Block{
Type: "RSA PRIVATE KEY",
Bytes: x509.MarshalPKCS1PrivateKey(certPrivKey),
})
if err != nil {
return nil, nil, err
}
return certPEM.Bytes(), certPrivKeyPEM.Bytes(), nil
}
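As a quick sanity check under stated assumptions, the PEM pair returned by CreateCertificate loads directly with crypto/tls; the hostname and port below are placeholders.

package main

import (
	"crypto/tls"
	"log"
	"net/http"

	"github.com/juanfont/headscale/integration/integrationutil"
)

func main() {
	certPEM, keyPEM, err := integrationutil.CreateCertificate("hs.example.test")
	if err != nil {
		log.Fatalf("creating certificate: %s", err)
	}
	// X509KeyPair accepts the PKCS#1 "RSA PRIVATE KEY" block emitted above.
	pair, err := tls.X509KeyPair(certPEM, keyPEM)
	if err != nil {
		log.Fatalf("loading key pair: %s", err)
	}
	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: &tls.Config{Certificates: []tls.Certificate{pair}},
	}
	log.Fatal(server.ListenAndServeTLS("", ""))
}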

@ -92,9 +92,9 @@ func TestEnablingRoutes(t *testing.T) {
assert.Len(t, routes, 3)
for _, route := range routes {
assert.Equal(t, true, route.GetAdvertised())
assert.Equal(t, false, route.GetEnabled())
assert.Equal(t, false, route.GetIsPrimary())
assert.True(t, route.GetAdvertised())
assert.False(t, route.GetEnabled())
assert.False(t, route.GetIsPrimary())
}
// Verify that no routes has been sent to the client,
@ -139,9 +139,9 @@ func TestEnablingRoutes(t *testing.T) {
assert.Len(t, enablingRoutes, 3)
for _, route := range enablingRoutes {
assert.Equal(t, true, route.GetAdvertised())
assert.Equal(t, true, route.GetEnabled())
assert.Equal(t, true, route.GetIsPrimary())
assert.True(t, route.GetAdvertised())
assert.True(t, route.GetEnabled())
assert.True(t, route.GetIsPrimary())
}
time.Sleep(5 * time.Second)
@ -212,18 +212,18 @@ func TestEnablingRoutes(t *testing.T) {
assertNoErr(t, err)
for _, route := range disablingRoutes {
assert.Equal(t, true, route.GetAdvertised())
assert.True(t, route.GetAdvertised())
if route.GetId() == routeToBeDisabled.GetId() {
assert.Equal(t, false, route.GetEnabled())
assert.False(t, route.GetEnabled())
// since this is the only route of this cidr,
// it will not failover, and remain Primary
// until something can replace it.
assert.Equal(t, true, route.GetIsPrimary())
assert.True(t, route.GetIsPrimary())
} else {
assert.Equal(t, true, route.GetEnabled())
assert.Equal(t, true, route.GetIsPrimary())
assert.True(t, route.GetEnabled())
assert.True(t, route.GetIsPrimary())
}
}
@ -342,9 +342,9 @@ func TestHASubnetRouterFailover(t *testing.T) {
t.Logf("initial routes %#v", routes)
for _, route := range routes {
assert.Equal(t, true, route.GetAdvertised())
assert.Equal(t, false, route.GetEnabled())
assert.Equal(t, false, route.GetIsPrimary())
assert.True(t, route.GetAdvertised())
assert.False(t, route.GetEnabled())
assert.False(t, route.GetIsPrimary())
}
// Verify that no routes has been sent to the client,
@ -391,14 +391,14 @@ func TestHASubnetRouterFailover(t *testing.T) {
assert.Len(t, enablingRoutes, 2)
// Node 1 is primary
assert.Equal(t, true, enablingRoutes[0].GetAdvertised())
assert.Equal(t, true, enablingRoutes[0].GetEnabled())
assert.Equal(t, true, enablingRoutes[0].GetIsPrimary(), "both subnet routers are up, expected r1 to be primary")
assert.True(t, enablingRoutes[0].GetAdvertised())
assert.True(t, enablingRoutes[0].GetEnabled())
assert.True(t, enablingRoutes[0].GetIsPrimary(), "both subnet routers are up, expected r1 to be primary")
// Node 2 is not primary
assert.Equal(t, true, enablingRoutes[1].GetAdvertised())
assert.Equal(t, true, enablingRoutes[1].GetEnabled())
assert.Equal(t, false, enablingRoutes[1].GetIsPrimary(), "both subnet routers are up, expected r2 to be non-primary")
assert.True(t, enablingRoutes[1].GetAdvertised())
assert.True(t, enablingRoutes[1].GetEnabled())
assert.False(t, enablingRoutes[1].GetIsPrimary(), "both subnet routers are up, expected r2 to be non-primary")
// Verify that the client has routes from the primary machine
srs1, err := subRouter1.Status()
@ -446,14 +446,14 @@ func TestHASubnetRouterFailover(t *testing.T) {
assert.Len(t, routesAfterMove, 2)
// Node 1 is not primary
assert.Equal(t, true, routesAfterMove[0].GetAdvertised())
assert.Equal(t, true, routesAfterMove[0].GetEnabled())
assert.Equal(t, false, routesAfterMove[0].GetIsPrimary(), "r1 is down, expected r2 to be primary")
assert.True(t, routesAfterMove[0].GetAdvertised())
assert.True(t, routesAfterMove[0].GetEnabled())
assert.False(t, routesAfterMove[0].GetIsPrimary(), "r1 is down, expected r2 to be primary")
// Node 2 is primary
assert.Equal(t, true, routesAfterMove[1].GetAdvertised())
assert.Equal(t, true, routesAfterMove[1].GetEnabled())
assert.Equal(t, true, routesAfterMove[1].GetIsPrimary(), "r1 is down, expected r2 to be primary")
assert.True(t, routesAfterMove[1].GetAdvertised())
assert.True(t, routesAfterMove[1].GetEnabled())
assert.True(t, routesAfterMove[1].GetIsPrimary(), "r1 is down, expected r2 to be primary")
srs2, err = subRouter2.Status()
@ -501,16 +501,16 @@ func TestHASubnetRouterFailover(t *testing.T) {
assert.Len(t, routesAfterBothDown, 2)
// Node 1 is not primary
assert.Equal(t, true, routesAfterBothDown[0].GetAdvertised())
assert.Equal(t, true, routesAfterBothDown[0].GetEnabled())
assert.Equal(t, false, routesAfterBothDown[0].GetIsPrimary(), "r1 and r2 is down, expected r2 to _still_ be primary")
assert.True(t, routesAfterBothDown[0].GetAdvertised())
assert.True(t, routesAfterBothDown[0].GetEnabled())
assert.False(t, routesAfterBothDown[0].GetIsPrimary(), "r1 and r2 is down, expected r2 to _still_ be primary")
// Node 2 is primary
// if the node goes down, but no other suitable route is
// available, keep the last known good route.
assert.Equal(t, true, routesAfterBothDown[1].GetAdvertised())
assert.Equal(t, true, routesAfterBothDown[1].GetEnabled())
assert.Equal(t, true, routesAfterBothDown[1].GetIsPrimary(), "r1 and r2 is down, expected r2 to _still_ be primary")
assert.True(t, routesAfterBothDown[1].GetAdvertised())
assert.True(t, routesAfterBothDown[1].GetEnabled())
assert.True(t, routesAfterBothDown[1].GetIsPrimary(), "r1 and r2 is down, expected r2 to _still_ be primary")
// TODO(kradalby): Check client status
// Both are expected to be down
@ -560,14 +560,14 @@ func TestHASubnetRouterFailover(t *testing.T) {
assert.Len(t, routesAfter1Up, 2)
// Node 1 is primary
assert.Equal(t, true, routesAfter1Up[0].GetAdvertised())
assert.Equal(t, true, routesAfter1Up[0].GetEnabled())
assert.Equal(t, true, routesAfter1Up[0].GetIsPrimary(), "r1 is back up, expected r1 to become be primary")
assert.True(t, routesAfter1Up[0].GetAdvertised())
assert.True(t, routesAfter1Up[0].GetEnabled())
assert.True(t, routesAfter1Up[0].GetIsPrimary(), "r1 is back up, expected r1 to become be primary")
// Node 2 is not primary
assert.Equal(t, true, routesAfter1Up[1].GetAdvertised())
assert.Equal(t, true, routesAfter1Up[1].GetEnabled())
assert.Equal(t, false, routesAfter1Up[1].GetIsPrimary(), "r1 is back up, expected r1 to become be primary")
assert.True(t, routesAfter1Up[1].GetAdvertised())
assert.True(t, routesAfter1Up[1].GetEnabled())
assert.False(t, routesAfter1Up[1].GetIsPrimary(), "r1 is back up, expected r1 to become be primary")
// Verify that the route is announced from subnet router 1
clientStatus, err = client.Status()
@ -614,14 +614,14 @@ func TestHASubnetRouterFailover(t *testing.T) {
assert.Len(t, routesAfter2Up, 2)
// Node 1 is not primary
assert.Equal(t, true, routesAfter2Up[0].GetAdvertised())
assert.Equal(t, true, routesAfter2Up[0].GetEnabled())
assert.Equal(t, true, routesAfter2Up[0].GetIsPrimary(), "r1 and r2 is back up, expected r1 to _still_ be primary")
assert.True(t, routesAfter2Up[0].GetAdvertised())
assert.True(t, routesAfter2Up[0].GetEnabled())
assert.True(t, routesAfter2Up[0].GetIsPrimary(), "r1 and r2 is back up, expected r1 to _still_ be primary")
// Node 2 is primary
assert.Equal(t, true, routesAfter2Up[1].GetAdvertised())
assert.Equal(t, true, routesAfter2Up[1].GetEnabled())
assert.Equal(t, false, routesAfter2Up[1].GetIsPrimary(), "r1 and r2 is back up, expected r1 to _still_ be primary")
assert.True(t, routesAfter2Up[1].GetAdvertised())
assert.True(t, routesAfter2Up[1].GetEnabled())
assert.False(t, routesAfter2Up[1].GetIsPrimary(), "r1 and r2 is back up, expected r1 to _still_ be primary")
// Verify that the route is announced from subnet router 1
clientStatus, err = client.Status()
@ -677,14 +677,14 @@ func TestHASubnetRouterFailover(t *testing.T) {
t.Logf("routes after disabling r1 %#v", routesAfterDisabling1)
// Node 1 is not primary
assert.Equal(t, true, routesAfterDisabling1[0].GetAdvertised())
assert.Equal(t, false, routesAfterDisabling1[0].GetEnabled())
assert.Equal(t, false, routesAfterDisabling1[0].GetIsPrimary())
assert.True(t, routesAfterDisabling1[0].GetAdvertised())
assert.False(t, routesAfterDisabling1[0].GetEnabled())
assert.False(t, routesAfterDisabling1[0].GetIsPrimary())
// Node 2 is primary
assert.Equal(t, true, routesAfterDisabling1[1].GetAdvertised())
assert.Equal(t, true, routesAfterDisabling1[1].GetEnabled())
assert.Equal(t, true, routesAfterDisabling1[1].GetIsPrimary())
assert.True(t, routesAfterDisabling1[1].GetAdvertised())
assert.True(t, routesAfterDisabling1[1].GetEnabled())
assert.True(t, routesAfterDisabling1[1].GetIsPrimary())
// Verify that the route is announced from subnet router 1
clientStatus, err = client.Status()
@ -735,14 +735,14 @@ func TestHASubnetRouterFailover(t *testing.T) {
assert.Len(t, routesAfterEnabling1, 2)
// Node 1 is not primary
assert.Equal(t, true, routesAfterEnabling1[0].GetAdvertised())
assert.Equal(t, true, routesAfterEnabling1[0].GetEnabled())
assert.Equal(t, false, routesAfterEnabling1[0].GetIsPrimary())
assert.True(t, routesAfterEnabling1[0].GetAdvertised())
assert.True(t, routesAfterEnabling1[0].GetEnabled())
assert.False(t, routesAfterEnabling1[0].GetIsPrimary())
// Node 2 is primary
assert.Equal(t, true, routesAfterEnabling1[1].GetAdvertised())
assert.Equal(t, true, routesAfterEnabling1[1].GetEnabled())
assert.Equal(t, true, routesAfterEnabling1[1].GetIsPrimary())
assert.True(t, routesAfterEnabling1[1].GetAdvertised())
assert.True(t, routesAfterEnabling1[1].GetEnabled())
assert.True(t, routesAfterEnabling1[1].GetIsPrimary())
// Verify that the route is announced from subnet router 1
clientStatus, err = client.Status()
@ -795,9 +795,9 @@ func TestHASubnetRouterFailover(t *testing.T) {
t.Logf("routes after deleting r2 %#v", routesAfterDeleting2)
// Node 1 is primary
assert.Equal(t, true, routesAfterDeleting2[0].GetAdvertised())
assert.Equal(t, true, routesAfterDeleting2[0].GetEnabled())
assert.Equal(t, true, routesAfterDeleting2[0].GetIsPrimary())
assert.True(t, routesAfterDeleting2[0].GetAdvertised())
assert.True(t, routesAfterDeleting2[0].GetEnabled())
assert.True(t, routesAfterDeleting2[0].GetIsPrimary())
// Verify that the route is announced from subnet router 1
clientStatus, err = client.Status()
@ -893,9 +893,9 @@ func TestEnableDisableAutoApprovedRoute(t *testing.T) {
assert.Len(t, routes, 1)
// All routes should be auto approved and enabled
assert.Equal(t, true, routes[0].GetAdvertised())
assert.Equal(t, true, routes[0].GetEnabled())
assert.Equal(t, true, routes[0].GetIsPrimary())
assert.True(t, routes[0].GetAdvertised())
assert.True(t, routes[0].GetEnabled())
assert.True(t, routes[0].GetIsPrimary())
// Stop advertising route
command = []string{
@ -924,9 +924,9 @@ func TestEnableDisableAutoApprovedRoute(t *testing.T) {
assert.Len(t, notAdvertisedRoutes, 1)
// Route is no longer advertised
assert.Equal(t, false, notAdvertisedRoutes[0].GetAdvertised())
assert.Equal(t, false, notAdvertisedRoutes[0].GetEnabled())
assert.Equal(t, true, notAdvertisedRoutes[0].GetIsPrimary())
assert.False(t, notAdvertisedRoutes[0].GetAdvertised())
assert.False(t, notAdvertisedRoutes[0].GetEnabled())
assert.True(t, notAdvertisedRoutes[0].GetIsPrimary())
// Advertise route again
command = []string{
@ -955,9 +955,9 @@ func TestEnableDisableAutoApprovedRoute(t *testing.T) {
assert.Len(t, reAdvertisedRoutes, 1)
// All routes should be auto approved and enabled
assert.Equal(t, true, reAdvertisedRoutes[0].GetAdvertised())
assert.Equal(t, true, reAdvertisedRoutes[0].GetEnabled())
assert.Equal(t, true, reAdvertisedRoutes[0].GetIsPrimary())
assert.True(t, reAdvertisedRoutes[0].GetAdvertised())
assert.True(t, reAdvertisedRoutes[0].GetEnabled())
assert.True(t, reAdvertisedRoutes[0].GetIsPrimary())
}
func TestAutoApprovedSubRoute2068(t *testing.T) {
@ -1163,9 +1163,9 @@ func TestSubnetRouteACL(t *testing.T) {
assert.Len(t, routes, 1)
for _, route := range routes {
assert.Equal(t, true, route.GetAdvertised())
assert.Equal(t, false, route.GetEnabled())
assert.Equal(t, false, route.GetIsPrimary())
assert.True(t, route.GetAdvertised())
assert.False(t, route.GetEnabled())
assert.False(t, route.GetIsPrimary())
}
// Verify that no routes has been sent to the client,
@ -1212,9 +1212,9 @@ func TestSubnetRouteACL(t *testing.T) {
assert.Len(t, enablingRoutes, 1)
// Node 1 has active route
assert.Equal(t, true, enablingRoutes[0].GetAdvertised())
assert.Equal(t, true, enablingRoutes[0].GetEnabled())
assert.Equal(t, true, enablingRoutes[0].GetIsPrimary())
assert.True(t, enablingRoutes[0].GetAdvertised())
assert.True(t, enablingRoutes[0].GetEnabled())
assert.True(t, enablingRoutes[0].GetIsPrimary())
// Verify that the client has routes from the primary machine
srs1, _ := subRouter1.Status()

@ -14,12 +14,14 @@ import (
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/integration/dockertestutil"
"github.com/juanfont/headscale/integration/dsic"
"github.com/juanfont/headscale/integration/hsic"
"github.com/juanfont/headscale/integration/tsic"
"github.com/ory/dockertest/v3"
"github.com/puzpuzpuz/xsync/v3"
"github.com/samber/lo"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/sync/errgroup"
"tailscale.com/envknob"
)
@ -140,6 +142,7 @@ type Scenario struct {
// TODO(kradalby): support multiple headcales for later, currently only
// use one.
controlServers *xsync.MapOf[string, ControlServer]
derpServers []*dsic.DERPServerInContainer
users map[string]*User
@ -203,11 +206,11 @@ func (s *Scenario) ShutdownAssertNoPanics(t *testing.T) {
if t != nil {
stdout, err := os.ReadFile(stdoutPath)
assert.NoError(t, err)
require.NoError(t, err)
assert.NotContains(t, string(stdout), "panic")
stderr, err := os.ReadFile(stderrPath)
assert.NoError(t, err)
require.NoError(t, err)
assert.NotContains(t, string(stderr), "panic")
}
@ -224,6 +227,13 @@ func (s *Scenario) ShutdownAssertNoPanics(t *testing.T) {
}
}
for _, derp := range s.derpServers {
err := derp.Shutdown()
if err != nil {
log.Printf("failed to tear down derp server: %s", err)
}
}
if err := s.pool.RemoveNetwork(s.network); err != nil {
log.Printf("failed to remove network: %s", err)
}
@ -352,7 +362,7 @@ func (s *Scenario) CreateTailscaleNodesInUser(
hostname := headscale.GetHostname()
opts = append(opts,
tsic.WithHeadscaleTLS(cert),
tsic.WithCACert(cert),
tsic.WithHeadscaleName(hostname),
)
@@ -651,3 +661,20 @@ func (s *Scenario) WaitForTailscaleLogout() error {

 	return nil
 }
+
+// CreateDERPServer creates a new DERP server in a container.
+func (s *Scenario) CreateDERPServer(version string, opts ...dsic.Option) (*dsic.DERPServerInContainer, error) {
+	derp, err := dsic.New(s.pool, version, s.network, opts...)
+	if err != nil {
+		return nil, fmt.Errorf("failed to create DERP server: %w", err)
+	}
+
+	err = derp.WaitForRunning()
+	if err != nil {
+		return nil, fmt.Errorf("failed to reach DERP server: %w", err)
+	}
+
+	s.derpServers = append(s.derpServers, derp)
+
+	return derp, nil
+}

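The new CreateDERPServer helper mirrors the existing pattern for headscale and tailscale containers: build, wait until reachable, then register for teardown. A sketch of how a test might drive it, assuming a ready *Scenario; the "head" version tag is an assumption about which build is wanted:

	// Hypothetical test body; scenario construction elided.
	derper, err := scenario.CreateDERPServer("head") // version tag is an assumption
	if err != nil {
		t.Fatalf("failed to create DERP server: %s", err)
	}
	// No explicit teardown needed: ShutdownAssertNoPanics now stops DERP servers too.
	_ = derper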
integration/tailscale.go

@@ -30,6 +30,7 @@ type TailscaleClient interface {
 	FQDN() (string, error)
 	Status(...bool) (*ipnstate.Status, error)
 	Netmap() (*netmap.NetworkMap, error)
+	DebugDERPRegion(region string) (*ipnstate.DebugDERPRegionReport, error)
 	Netcheck() (*netcheck.Report, error)
 	WaitForNeedsLogin() error
 	WaitForRunning() error

integration/tsic/tsic.go

@@ -33,7 +33,7 @@ const (
 	defaultPingTimeout   = 300 * time.Millisecond
 	defaultPingCount     = 10
 	dockerContextPath    = "../."
-	headscaleCertPath    = "/usr/local/share/ca-certificates/headscale.crt"
+	caCertRoot           = "/usr/local/share/ca-certificates"
 	dockerExecuteTimeout = 60 * time.Second
 )
@@ -71,7 +71,7 @@ type TailscaleInContainer struct {
 	fqdn string

 	// optional config
-	headscaleCert     []byte
+	caCerts           [][]byte
 	headscaleHostname string
 	withWebsocketDERP bool
 	withSSH           bool
// Tailscale instance.
type Option = func(c *TailscaleInContainer)
// WithHeadscaleTLS takes the certificate of the Headscale instance
// and adds it to the trusted surtificate of the Tailscale container.
func WithHeadscaleTLS(cert []byte) Option {
// WithCACert adds it to the trusted surtificate of the Tailscale container.
func WithCACert(cert []byte) Option {
return func(tsic *TailscaleInContainer) {
tsic.headscaleCert = cert
tsic.caCerts = append(tsic.caCerts, cert)
}
}
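Because WithCACert appends to a slice instead of overwriting a single field, a container can now trust several CAs at once, for example headscale's certificate plus a custom DERP server's. A sketch, assuming tsic.New follows the pool/version/network pattern used elsewhere in these tests and that hsCert and derpCert are PEM byte slices obtained beforehand:

	// hsCert and derpCert are hypothetical PEM-encoded certificates.
	ts, err := tsic.New(pool, "unstable", network,
		tsic.WithCACert(hsCert),   // trust headscale
		tsic.WithCACert(derpCert), // additionally trust the DERP server
	)

Each certificate ends up as its own file under /usr/local/share/ca-certificates in the container, as the write loop further down shows.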
@@ -126,7 +125,7 @@ func WithOrCreateNetwork(network *dockertest.Network) Option {
 }

 // WithHeadscaleName sets the name of the headscale instance,
-// mostly useful in combination with TLS and WithHeadscaleTLS.
+// mostly useful in combination with TLS and WithCACert.
 func WithHeadscaleName(hsName string) Option {
 	return func(tsic *TailscaleInContainer) {
 		tsic.headscaleHostname = hsName
@@ -260,12 +259,8 @@ func New(
 		)
 	}

-	if tsic.headscaleHostname != "" {
-		tailscaleOptions.ExtraHosts = []string{
-			"host.docker.internal:host-gateway",
-			fmt.Sprintf("%s:host-gateway", tsic.headscaleHostname),
-		}
-	}
+	tailscaleOptions.ExtraHosts = append(tailscaleOptions.ExtraHosts,
+		"host.docker.internal:host-gateway")

 	if tsic.workdir != "" {
 		tailscaleOptions.WorkingDir = tsic.workdir
@@ -351,8 +346,8 @@ func New(
 	tsic.container = container

-	if tsic.hasTLS() {
-		err = tsic.WriteFile(headscaleCertPath, tsic.headscaleCert)
+	for i, cert := range tsic.caCerts {
+		err = tsic.WriteFile(fmt.Sprintf("%s/user-%d.crt", caCertRoot, i), cert)
 		if err != nil {
 			return nil, fmt.Errorf("failed to write TLS certificate to container: %w", err)
 		}
@@ -361,10 +356,6 @@ func New(
 	return tsic, nil
 }

-func (t *TailscaleInContainer) hasTLS() bool {
-	return len(t.headscaleCert) != 0
-}
-
 // Shutdown stops and cleans up the Tailscale container.
 func (t *TailscaleInContainer) Shutdown() error {
 	err := t.SaveLog("/tmp/control")
@@ -739,6 +730,34 @@ func (t *TailscaleInContainer) watchIPN(ctx context.Context) (*ipn.Notify, error) {
 	}
 }

+func (t *TailscaleInContainer) DebugDERPRegion(region string) (*ipnstate.DebugDERPRegionReport, error) {
+	if !util.TailscaleVersionNewerOrEqual("1.34", t.version) {
+		panic("tsic.DebugDERPRegion() called with unsupported version: " + t.version)
+	}
+
+	command := []string{
+		"tailscale",
+		"debug",
+		"derp",
+		region,
+	}
+
+	result, stderr, err := t.Execute(command)
+	if err != nil {
+		fmt.Printf("stderr: %s\n", stderr) // nolint
+
+		return nil, fmt.Errorf("failed to execute tailscale debug derp command: %w", err)
+	}
+
+	var report ipnstate.DebugDERPRegionReport
+	err = json.Unmarshal([]byte(result), &report)
+	if err != nil {
+		return nil, fmt.Errorf("failed to unmarshal tailscale derp region report: %w", err)
+	}
+
+	return &report, err
+}
 // Netcheck returns the current Netcheck Report (netcheck.Report) of the Tailscale instance.
 func (t *TailscaleInContainer) Netcheck() (*netcheck.Report, error) {
 	command := []string{
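To close, a sketch of how an integration test could use the new DebugDERPRegion helper to assert that a custom DERP region is healthy; the field names follow tailscale's ipnstate.DebugDERPRegionReport, but the surrounding test and the region ID "900" are hypothetical:

	// client implements TailscaleClient; "900" is an illustrative region ID.
	report, err := client.DebugDERPRegion("900")
	if err != nil {
		t.Fatalf("could not query DERP region: %s", err)
	}
	for _, warn := range report.Warnings {
		t.Logf("derp warning: %s", warn)
	}
	if len(report.Errors) > 0 {
		t.Errorf("DERP region reported errors: %v", report.Errors)
	}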