Add Hetzner Cloud production infrastructure with multi-node support

- This commit introduces the Terraform configuration to provision a production environment on Hetzner Cloud, building on the existing test setup.
- Key improvements and new features include:
* **Multi-node clusters:** Scaling to a 3-node Swarm application cluster and a 3-node database cluster for improved resilience.
* **High availability:** Utilizing a Hetzner Floating IP for the application entry point and `spread` placement groups for fault tolerance across physical hosts.
* **Enhanced network security:** Internal management services (RabbitMQ, APISIX, Prometheus, Grafana) are restricted to the application subnet, expected to be accessed via an internal reverse proxy (SWAG).
* **Internal database replication:** New firewall rules enable PostgreSQL replication and MongoDB replica set traffic within the database subnet.
* **Refined test environment:** Updates to align `test` configuration with the new `prod` structure, including a dedicated floating IP and adjusted firewall rules.
* **Configuration standardization:** Environment-specific details moved to `locals.tf` for clarity, with upgraded server types and migration to Rocky Linux as the base image.
- The documentation was also updated to the latest Terraform version for consistency.
Murat ÖZDEMİR 2026-05-10 15:43:22 +03:00
parent 2d515f7206
commit 720c79d460
30 changed files with 872 additions and 164 deletions


@ -6,31 +6,35 @@ indicates in which Terraform/Ansible setup phase each item is handled
## TEST environment
| Roadmap step | Phase where it is handled |
| ------------------------------------------------ | ----------------------------------------------------------------------------------------------------------- |
| ---------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| Hetzner firewall (22/80/443 only) | **Terraform `01-test-terraform-iaac.md`** `firewall.tf` |
| Server creation (`test-swarm-01`, `test-db-01`) | **Terraform `01-test-terraform-iaac.md`** `servers.tf` |
| Private network + placement group | **Terraform `01-test-terraform-iaac.md`** `network.tf`, `placement.tf` |
| Server creation (`iklim-app-01`, `iklim-db-01`) | **Terraform `01-test-terraform-iaac.md`** `servers.tf` |
| Private network + placement group (`iklim-test-spread`) | **Terraform `01-test-terraform-iaac.md`** `network.tf`, `placement.tf` |
| Floating IP (`iklim-test-app-fip`) | **Terraform `01-test-terraform-iaac.md`** `floating_ip.tf` |
| Docker Engine installation | **Ansible `02-test-ansible-bootstrap.md`** `docker` role |
| Security hardening (SSH, UFW, fail2ban) | **Ansible `02-test-ansible-bootstrap.md`** `hardening` role |
| Security hardening (SSH, firewalld, fail2ban) | **Ansible `02-test-ansible-bootstrap.md`** `hardening` role |
| Docker Swarm init (`init/swarm-init.sh`) | **Ansible `02-test-ansible-bootstrap.md`** `swarm` role (the pipeline script keeps running idempotently) |
| `type=service` node label | **Ansible `02-test-ansible-bootstrap.md`** `swarm` role |
| `/opt/iklimco/...` directories | **Ansible `02-test-ansible-bootstrap.md`** `node_dirs` role |
| StorageBox DAVFS mount (`u469968-sub1`) | **Ansible `02-test-ansible-bootstrap.md`** `storagebox` role |
| `act_runner` systemd installation | **Ansible `03-test-runner-ve-deploy-onkosullari.md`** `gitea_runner` role |
| Uploading GoDaddy credentials to the StorageBox | **Remains manual** — secret management, outside Terraform/Ansible |
## PROD environment
| Roadmap step | Phase where it is handled |
| ----------------------------------------------- | ------------------------------------------------------------------------ |
| Creating 6 servers (3 swarm + 3 db) | **Terraform `04-prod-terraform-iaac.md`** `servers.tf` |
| -------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| Creating 6 servers (`iklim-app-01/02/03`, `iklim-db-01/02/03`) | **Terraform `04-prod-terraform-iaac.md`** `servers.tf` |
| Private network + 2 placement groups | **Terraform `04-prod-terraform-iaac.md`** `network.tf`, `placement.tf` |
| Firewall (only 22/80/443 public) | **Terraform `04-prod-terraform-iaac.md`** `firewall.tf` |
| Docker Engine installation (`prod-swarm-*`) | **Ansible `05-prod-ansible-bootstrap.md`** `docker` role |
| Floating IP (`iklim-prod-app-fip`, assigned to `iklim-app-01`) | **Terraform `04-prod-terraform-iaac.md`** `floating_ip.tf` |
| Docker Engine installation (`iklim-app-*`) | **Ansible `05-prod-ansible-bootstrap.md`** `docker` role |
| Security hardening (all nodes) | **Ansible `05-prod-ansible-bootstrap.md`** `hardening` role |
| Swarm init (`prod-swarm-01`) | **Ansible `05-prod-ansible-bootstrap.md`** `swarm` role |
| Manager join (`prod-swarm-02`, `prod-swarm-03`) | **Ansible `05-prod-ansible-bootstrap.md`** `swarm` role |
| Swarm init (`iklim-app-01`) | **Ansible `05-prod-ansible-bootstrap.md`** `swarm` role |
| Manager join (`iklim-app-02`, `iklim-app-03`) | **Ansible `05-prod-ansible-bootstrap.md`** `swarm` role |
| `type=service` node label (3 swarm nodes) | **Ansible `05-prod-ansible-bootstrap.md`** `swarm` role |
| `/opt/iklimco/...` directories | **Ansible `05-prod-ansible-bootstrap.md`** `node_dirs` role |
| StorageBox DAVFS mount (`u469968-sub2`) | **Ansible `05-prod-ansible-bootstrap.md`** `storagebox` role |
| 3× `act_runner` systemd (HA runner) | **Ansible `06-prod-runner-ha-ve-swarm.md`** `gitea_runner` role |
| Uploading GoDaddy credentials to the StorageBox | **Remains manual** — secret management, outside Terraform/Ansible |
| DB nodes joining the Swarm | **Out of scope** — the DB cluster is managed separately |
@ -48,8 +52,8 @@ Environment_Infrastructure/
05-prod-ansible-bootstrap.md
06-prod-runner-ha-ve-swarm.md
07-private-network-port-matrisi.md
roadmap/
roadmap/
test-env/ ← Test environment roadmap steps
prod-env/ ← Prod environment roadmap steps
setup-vs-technical-debt-map.md ← This file
setup-vs-roadmap-map.md ← This file
```


@ -42,8 +42,8 @@ Test environment minimum topology:
| Node | Role | Note |
| --- | --- | --- |
| `test-swarm-01` | Swarm manager + app worker + Gitea runner | CI/CD test deploys run through this node |
| `test-db-01` | DB node | DB infrastructure will be installed manually, not via Gitea CI/CD |
| `iklim-app-01` | Swarm manager + app worker + Gitea runner | CI/CD test deploys run through this node |
| `iklim-db-01` | DB node | DB infrastructure will be installed manually, not via Gitea CI/CD |
Terraform/Ansible take the test DB setup only as far as machine and OS preparation. PostgreSQL/MongoDB cluster installation is outside this phase.
@ -53,8 +53,8 @@ Prod environment HA topology:
| Node group | Count | Role |
| --- | ---: | --- |
| `prod-swarm-*` | 3 | Each one a Swarm manager + app worker |
| `prod-db-*` | 3 | DB cluster nodes |
| `iklim-app-*` | 3 | Each one a Swarm manager + app worker |
| `iklim-db-*` | 3 | DB cluster nodes |
The prod DB infrastructure will be installed manually, not via Gitea CI/CD. Terraform prepares the DB machines and the network/firewall rules; Ansible handles OS hardening and base dependencies.
@ -88,12 +88,12 @@ Preferred installation:
For prod HA, `act_runner` is installed not on a single machine but on all 3 Swarm manager nodes, so the pipeline can keep running when a manager/runner is lost. Runner labels must be both shared and node-specific:
- Shared: `prod-runner`
- Node-specific: `prod-swarm-01`, `prod-swarm-02`, `prod-swarm-03`
- Node-specific: `iklim-app-01`, `iklim-app-02`, `iklim-app-03`
A single runner is sufficient for test:
- Shared: `test-runner`
- Node-specific: `test-swarm-01`
- Node-specific: `iklim-app-01`
## Deploy Lock Decision
@ -147,12 +147,12 @@ Constraints:
At least two placement groups are recommended for prod:
- `prod-swarm-spread`: 3 Swarm manager/app nodes
- `prod-db-spread`: 3 DB nodes
- `iklim-prod-app-spread`: 3 Swarm manager/app nodes
- `iklim-prod-db-spread`: 3 DB nodes
Optional for test:
- `test-spread`: `test-swarm-01` and `test-db-01`
- `iklim-test-spread`: `iklim-app-01` and `iklim-db-01`
Sources:


@ -14,10 +14,11 @@ In the test environment Terraform creates the following:
- Public ingress: only `22/tcp`, `80/tcp`, `443/tcp`
- Private ingress: the test rules in `07-private-network-port-matrisi.md`
- SSH key
- Placement group: `test-spread`
- Placement group: `iklim-test-spread`
- Floating IP: static IPv4 for the swarm entry point
- Server:
- `test-swarm-01`
- `test-db-01`
- `iklim-app-01`
- `iklim-db-01`
- Ansible inventory output
Terraform does not install any DB software. The DB node is prepared only at the machine, network, and firewall level.
@ -48,23 +49,24 @@ Minimum variables:
```hcl
hcloud_token = "secret"
environment = "test"
location = "fsn1"
image = "ubuntu-24.04"
image = "rocky-10"
server_type_swarm = "cx32"
server_type_db = "cx42"
admin_ssh_public_key_path = "~/.ssh/id_ed25519.pub"
admin_allowed_cidrs = ["X.X.X.X/32"]
```
The `environment` constant lives in `locals.tf`; it is not overridden via `tfvars`.
Start with a single `location`. Multi-region disaster recovery is out of scope at this stage and should be added to the documentation later.
## Server Roles
| Server | Private IP | Role |
| --- | --- | --- |
| `test-swarm-01` | `10.10.10.11` | Swarm manager + app worker + Gitea runner |
| `test-db-01` | `10.10.20.11` | DB node ready for manual DB installation |
| `iklim-app-01` | `10.10.10.11` | Swarm manager + app worker + Gitea runner |
| `iklim-db-01` | `10.10.20.11` | DB node ready for manual DB installation |
Private IPs must be defined statically in Terraform so the Ansible inventory and firewall rules stay deterministic.
@ -75,16 +77,30 @@ Public ingress:
| Port | Source | Target |
| --- | --- | --- |
| `22/tcp` | `admin_allowed_cidrs` | All test nodes |
| `80/tcp` | `0.0.0.0/0`, `::/0` | `test-swarm-01` |
| `443/tcp` | `0.0.0.0/0`, `::/0` | `test-swarm-01` |
| `80/tcp` | `0.0.0.0/0`, `::/0` | `iklim-app-01` |
| `443/tcp` | `0.0.0.0/0`, `::/0` | `iklim-app-01` |
For public ingress, `8200/tcp`, `5432/tcp`, `27017/tcp`, `5672/tcp`, `15672/tcp`, `6379/tcp`, `2379/tcp`, `9180/tcp`, `9090/tcp`, `3000/tcp` will not be opened.
For public ingress, `8200/tcp`, `5432/tcp`, `27017/tcp`, `5672/tcp`, `15672/tcp`, `6379/tcp`, `2379/tcp`, `9000/tcp`, `9180/tcp`, `9090/tcp`, `3000/tcp` will not be opened.
For private ingress, `07-private-network-port-matrisi.md` is the reference.
Private ingress (sourced from the app subnet `10.10.10.0/24`):
| Port | Service | Access method |
| --- | --- | --- |
| `15672/tcp` | RabbitMQ Management | Behind SWAG on `443` — IP-restricted |
| `9090/tcp` | Prometheus | Behind SWAG on `443` — IP-restricted |
| `3000/tcp` | Grafana | Behind SWAG on `443` — IP-restricted |
| `9000/tcp` | APISIX Dashboard | Behind SWAG on `443` — IP-restricted |
| `9180/tcp` | APISIX Admin API | Only the Dashboard reaches it from inside the Docker overlay; no human access needed |
| `8200/tcp` | Vault | Docker overlay / private network |
IP restriction is done in the SWAG nginx configuration, not in the Hetzner firewall.
None of these ports are opened publicly, not even with `admin_allowed_cidrs` as the source.
For the remaining private ingress rules, `07-private-network-port-matrisi.md` is the reference.
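A minimal sketch of that SWAG-side restriction as an nginx server block — the Grafana upstream name, port, and admin CIDR are illustrative assumptions, not values from this repo:
```nginx
# grafana.subdomain.conf — hypothetical SWAG proxy conf
server {
    listen 443 ssl;
    server_name grafana.*;

    location / {
        allow 1.2.3.4/32;               # admin CIDR (placeholder)
        deny all;                       # the IP restriction happens here, not in the Hetzner firewall
        proxy_pass http://grafana:3000;
    }
}
```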
## Placement Group
The `test-spread` placement group will use `type = "spread"`. Since test has two servers, this group aims to spread `test-swarm-01` and `test-db-01` across different physical hosts.
The `iklim-test-spread` placement group will use `type = "spread"`. Since test has two servers, this group aims to spread `iklim-app-01` and `iklim-db-01` across different physical hosts.
Note: a spread placement group does not guarantee different racks or locations; it only reduces the impact of a single physical host failure.


@ -6,8 +6,8 @@ The purpose of this phase is to bring the test machines created by Terraform to Linux, hardening,…
| Host | Role |
| --- | --- |
| `test-swarm-01` | Swarm manager + app worker |
| `test-db-01` | OS-hardened DB node ready for manual DB installation |
| `iklim-app-01` | Swarm manager + app worker |
| `iklim-db-01` | OS-hardened DB node ready for manual DB installation |
## Recommended File Structure
@ -28,13 +28,14 @@ ansible/
docker/
swarm/
node_dirs/
storagebox/
```
## Base Role
Applied to all test nodes:
- `apt update`
- `dnf update`
- base packages:
- `curl`
- `wget`
@ -42,9 +43,6 @@ Applied to all test nodes:
- `jq`
- `unzip`
- `ca-certificates`
- `gnupg`
- `lsb-release`
- `ufw`
- `fail2ban`
- `chrony`
- `python3`
@ -63,22 +61,21 @@ Applied to all test nodes:
- `PermitEmptyPasswords no`
- `MaxAuthTries 3`
- The `fail2ban` SSH jail is enabled.
- `unattended-upgrades` is enabled.
- UFW defaults:
- incoming: deny
- Automatic security updates are enabled via `dnf-automatic`.
- `firewalld` defaults:
- incoming: deny (drop zone)
- outgoing: allow
- Public SSH is opened only from the admin CIDR.
Note: Docker's iptables rules can interact with UFW. The Hetzner Cloud firewall is considered the primary outer perimeter; UFW is used as a second layer inside the host.
Note: Docker's iptables rules can interact with firewalld. The Hetzner Cloud firewall is considered the primary outer perimeter; firewalld is used as a second layer inside the host.
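A minimal sketch of the firewalld defaults described above, assuming firewalld is already running (the Rocky Linux default); the admin CIDR is a placeholder:
```bash
firewall-cmd --set-default-zone=drop          # incoming: deny
firewall-cmd --permanent --zone=drop \
  --add-rich-rule='rule family="ipv4" source address="1.2.3.4/32" port port="22" protocol="tcp" accept'   # SSH from admin CIDR only
firewall-cmd --reload
```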
## Docker Role
Required only on `test-swarm-01`. On `test-db-01` it can be kept optional, depending on the manual DB installation strategy.
Required only on `iklim-app-01`. On `iklim-db-01` it can be kept optional, depending on the manual DB installation strategy.
Docker is installed from the official Docker apt repository:
Docker is installed from the official Docker dnf repository:
- Docker GPG key
- Docker apt source
- Docker GPG key + dnf repository (`https://download.docker.com/linux/rhel/docker-ce.repo`)
- packages:
- `docker-ce`
- `docker-ce-cli`
@ -91,7 +88,7 @@ The Docker convenience script will not be used. For a production-like test environment, p…
## Swarm Role
On `test-swarm-01`:
On `iklim-app-01`:
- `docker swarm init`
- advertise addr: `10.10.10.11`
@ -102,7 +99,7 @@ The Docker convenience script will not be used. For a production-like test environment, p…
- attachable: `true`
- The node is labeled `type=service`:
```bash
docker node update --label-add type=service test-swarm-01
docker node update --label-add type=service iklim-app-01
```
- The node stays `AVAILABILITY=Active` (not drained); the single node is both manager and worker.
@ -110,7 +107,7 @@ Since test is a single-node Swarm, there is no join-token usage.
## Node Directory Role
Deploy prerequisites on `test-swarm-01`:
Deploy prerequisites on `iklim-app-01`:
```text
/opt/iklimco
@ -128,13 +125,138 @@ Minimum required on the DB node for manual DB installation:
/opt/iklimco/backup
```
## StorageBox DAVFS Mount Role
Applied to both nodes (`iklim-app-01` and `iklim-db-01`).
### Purpose
Mounts the Hetzner StorageBox at `/mnt/storagebox` over the WebDAV (DAVFS) protocol. Docker volumes are bound under this directory for data persistence and backups.
### Test Environment Sub-Account
| Parameter | Variable | Value |
| --- | --- | --- |
| Main account | `storagebox_account` | `u469968` |
| Sub-account | `storagebox_user` | `u469968-sub1` |
| WebDAV URL | `storagebox_url` | `https://u469968-sub1.your-storagebox.de/` |
| Mount point | `storagebox_mount_point` | `/mnt/storagebox` |
### Role Variables
`group_vars/all.yml` — shared across all environments:
```yaml
storagebox_account: "u469968"
```
`group_vars/test.yml` — test-specific; the user and URL are derived from the account:
```yaml
storagebox_user: "{{ storagebox_account }}-sub1"
storagebox_url: "https://{{ storagebox_user }}.your-storagebox.de/"
storagebox_password: "{{ vault_storagebox_password }}" # stored encrypted with Ansible Vault
storagebox_mount_point: "/mnt/storagebox"
```
In prod only the suffix changes (`sub1` → `sub2`); everything else is derived, as the sketch below shows.
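A hypothetical `group_vars/prod.yml`, assuming the same derivation pattern (the prod vault file name is also an assumption):
```yaml
storagebox_user: "{{ storagebox_account }}-sub2"
storagebox_url: "https://{{ storagebox_user }}.your-storagebox.de/"
storagebox_password: "{{ vault_storagebox_password }}"  # from group_vars/prod-vault.yml (assumed)
storagebox_mount_point: "/mnt/storagebox"
```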
The `vault_storagebox_password` value is kept encrypted with Ansible Vault in `group_vars/test-vault.yml`:
```bash
# Encrypt
ansible-vault encrypt group_vars/test-vault.yml
# Edit
ansible-vault edit group_vars/test-vault.yml
```
Contents of `test-vault.yml`:
```yaml
vault_storagebox_password: "SUB_ACCOUNT_PASSWORD"
```
### Steps
1. **Install davfs2**
```yaml
- name: Install davfs2
ansible.builtin.dnf:
name: davfs2
state: present
```
2. **Credentials file** (`/etc/davfs2/secrets`)
```yaml
- name: Configure davfs2 secrets
ansible.builtin.lineinfile:
path: /etc/davfs2/secrets
line: "{{ storagebox_url }} {{ storagebox_user }} {{ storagebox_password }}"
create: yes
mode: "0600"
owner: root
group: root
```
3. **Create the mount point**
```yaml
- name: Create mount point
ansible.builtin.file:
path: "{{ storagebox_mount_point }}"
state: directory
mode: "0755"
```
4. **fstab entry**
```yaml
- name: Add fstab entry
ansible.builtin.lineinfile:
path: /etc/fstab
line: >-
{{ storagebox_url }} {{ storagebox_mount_point }} davfs
_netdev,auto,user,rw,uid=root,gid=root 0 0
state: present
```
5. **Mount**
```yaml
- name: Mount StorageBox
ansible.builtin.command: mount {{ storagebox_mount_point }}
args:
creates: "{{ storagebox_mount_point }}/.mounted_marker"
```
To verify the mount succeeded, a marker file can be written into the directory:
```yaml
- name: Write mount marker
ansible.builtin.copy:
content: "mounted by ansible"
dest: "{{ storagebox_mount_point }}/.mounted_marker"
```
### Notes
- The `davfs2` package lives in the EPEL repository; the base role must install `epel-release` first (`dnf install epel-release`).
- StorageBox passwords are never added to the repository in plaintext; Ansible Vault is mandatory.
- Thanks to the `_netdev` flag, the mount point is remounted automatically after the network comes up on reboot.
- Docker volumes are pointed at a subdirectory under this mount, e.g. `/mnt/storagebox/volumes/`, as sketched below.
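A minimal sketch of binding a Docker service to that subdirectory — the service and image names are hypothetical:
```yaml
# docker-compose.yml fragment — illustrative only
services:
  app:
    image: example/app:latest
    volumes:
      - /mnt/storagebox/volumes/app-data:/data  # StorageBox-backed bind mount
```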
## Acceptance Criteria
- `ansible -i inventory/generated/test.yml all -m ping` succeeds.
- `docker info` works on `test-swarm-01`.
- Swarm is active on `test-swarm-01`; the node is `AVAILABILITY=Active` (not drained).
- `docker info` works on `iklim-app-01`.
- Swarm is active on `iklim-app-01`; the node is `AVAILABILITY=Active` (not drained).
- `iklimco-net` appears in `docker network ls`.
- The output of `docker node inspect test-swarm-01 --format '{{.Spec.Labels}}'` contains `map[type:service]`.
- No public DB port is open on `test-db-01`.
- Public ports are limited to `22`, `80`, `443` at the Hetzner firewall + UFW level.
- The output of `docker node inspect iklim-app-01 --format '{{.Spec.Labels}}'` contains `map[type:service]`.
- No public DB port is open on `iklim-db-01`.
- Public ports are limited to `22`, `80`, `443` at the Hetzner firewall + firewalld level.
- `mount | grep storagebox` shows the StorageBox mount on both nodes.
- `ls /mnt/storagebox/.mounted_marker` succeeds.
- The mount comes back automatically after a reboot.
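A quick verification sketch for these criteria; the hostnames follow the new naming and SSH access as root is assumed:
```bash
ansible -i inventory/generated/test.yml all -m ping
ssh root@iklim-app-01 'docker info --format "{{.Swarm.LocalNodeState}}"'   # expect: active
ssh root@iklim-app-01 "docker node inspect iklim-app-01 --format '{{.Spec.Labels}}'"
ssh root@iklim-db-01 'mount | grep storagebox && ls /mnt/storagebox/.mounted_marker'
```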


@ -8,7 +8,7 @@ A single runner is sufficient in the test environment:
| Host | Runner |
| --- | --- |
| `test-swarm-01` | `act_runner` systemd servisi |
| `iklim-app-01` | `act_runner` systemd servisi |
The runner will not run as a Docker container, and `/var/run/docker.sock` will not be mounted into any runner container.
@ -51,7 +51,7 @@ Test runner labels:
```text
test-runner
test-swarm-01
iklim-app-01
ubuntu-24.04
docker
swarm-manager
@ -61,7 +61,7 @@ In existing workflows, the `runs-on` value for test must match one of these labels…
## Deploy Prerequisites
What must be present on `test-swarm-01` for the test deploy pipelines:
What must be present on `iklim-app-01` for the test deploy pipelines:
- Docker Engine
- Docker Compose plugin


@ -15,15 +15,16 @@ In the prod environment Terraform creates the following:
- Private ingress: the prod rules in `07-private-network-port-matrisi.md`
- SSH key
- Placement groups:
- `prod-swarm-spread`
- `prod-db-spread`
- `iklim-prod-app-spread`
- `iklim-prod-db-spread`
- Floating IP: static IPv4 for the app entry point (assigned to `iklim-app-01`)
- Servers:
- `prod-swarm-01`
- `prod-swarm-02`
- `prod-swarm-03`
- `prod-db-01`
- `prod-db-02`
- `prod-db-03`
- `iklim-app-01`
- `iklim-app-02`
- `iklim-app-03`
- `iklim-db-01`
- `iklim-db-02`
- `iklim-db-03`
- Ansible inventory output
DB cluster software will not be installed by Terraform. The DB nodes are prepared only at the machine, network, and firewall level.
@ -42,6 +43,7 @@ terraform/
firewall.tf
placement.tf
servers.tf
floating_ip.tf
outputs.tf
terraform.tfvars.example
```
@ -50,13 +52,14 @@ terraform/
## Variables
The `environment` constant lives in `locals.tf`; it is not overridden via `tfvars`.
Minimum variables:
```hcl
hcloud_token = "secret"
environment = "prod"
location = "fsn1"
image = "ubuntu-24.04"
image = "rocky-10"
server_type_swarm = "cx42"
server_type_db = "cx52"
admin_ssh_public_key_path = "~/.ssh/id_ed25519.pub"
@ -65,26 +68,26 @@ admin_allowed_cidrs = ["X.X.X.X/32"]
Server type values may change with capacity needs. This document fixes the topology and security decisions; sizing can be revised later.
## Server Roles and Private IP Plan
| Server | Private IP | Role |
| --- | --- | --- |
| `prod-swarm-01` | `10.20.10.11` | Swarm manager + app worker + runner |
| `prod-swarm-02` | `10.20.10.12` | Swarm manager + app worker + runner |
| `prod-swarm-03` | `10.20.10.13` | Swarm manager + app worker + runner |
| `prod-db-01` | `10.20.20.11` | Manual DB cluster node |
| `prod-db-02` | `10.20.20.12` | Manual DB cluster node |
| `prod-db-03` | `10.20.20.13` | Manual DB cluster node |
| `iklim-app-01` | `10.20.10.11` | Swarm manager + app worker + runner (primary, receives the FIP) |
| `iklim-app-02` | `10.20.10.12` | Swarm manager + app worker + runner |
| `iklim-app-03` | `10.20.10.13` | Swarm manager + app worker + runner |
| `iklim-db-01` | `10.20.20.11` | Manual DB cluster node |
| `iklim-db-02` | `10.20.20.12` | Manual DB cluster node |
| `iklim-db-03` | `10.20.20.13` | Manual DB cluster node |
Private IPs must be defined statically.
Private IPs are defined statically in `locals.tf` as the `swarm_private_ips` and `db_private_ips` maps; the server list is derived from these maps with `for_each`.
## Placement Group Decision
Two separate spread placement groups for prod:
```text
prod-swarm-spread: prod-swarm-01/02/03
prod-db-spread: prod-db-01/02/03
iklim-prod-app-spread: iklim-app-01/02/03
iklim-prod-db-spread: iklim-db-01/02/03
```
This way the Swarm quorum nodes are placed on different physical hosts among themselves, and likewise the DB nodes among themselves.
@ -96,6 +99,10 @@ Notes:
- Multi-location/region disaster recovery is out of scope at this stage.
- When scale grows later, multi-location DR should be designed separately.
## Floating IP
An IPv4 floating IP named `iklim-prod-app-fip` is created and assigned to `iklim-app-01`. The DNS A record points at this IP. If failover is needed, the floating IP can be moved to another app node, as sketched below.
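A failover sketch using the `hcloud` CLI; the resource names match this repo, but the exact runbook is an assumption:
```bash
# move the floating IP to iklim-app-02 (requires the hcloud CLI and the prod project token)
hcloud floating-ip assign iklim-prod-app-fip iklim-app-02
# note: the Terraform assignment resource pins the FIP to iklim-app-01,
# so the next `terraform apply` would move it back unless the code is updated
```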
## Public Firewall
Public ingress:
@ -103,8 +110,8 @@ Public ingress:
| Port | Source | Target |
| --- | --- | --- |
| `22/tcp` | `admin_allowed_cidrs` | All prod nodes |
| `80/tcp` | `0.0.0.0/0`, `::/0` | Prod gateway entrypoint |
| `443/tcp` | `0.0.0.0/0`, `::/0` | Prod gateway entrypoint |
| `80/tcp` | `0.0.0.0/0`, `::/0` | `iklim-app-*` (via the Floating IP) |
| `443/tcp` | `0.0.0.0/0`, `::/0` | `iklim-app-*` (via the Floating IP) |
The following ports will not be opened publicly in prod:
@ -118,15 +125,78 @@ The following ports will not be opened publicly in prod:
- `9090/tcp` Prometheus
- `3000/tcp` Grafana
If needed, these services can be reached over the private network, a VPN, a bastion, or an extra rule restricted to the admin CIDR. The default public policy stays closed.
## Private Firewall
Private ingress (sourced from the app subnet `10.20.10.0/24`):
| Port | Service | Access method |
| --- | --- | --- |
| `15672/tcp` | RabbitMQ Management | Behind SWAG on `443` — IP-restricted |
| `9090/tcp` | Prometheus | Behind SWAG on `443` — IP-restricted |
| `3000/tcp` | Grafana | Behind SWAG on `443` — IP-restricted |
| `9000/tcp` | APISIX Dashboard | Behind SWAG on `443` — IP-restricted |
| `9180/tcp` | APISIX Admin API | Only the Dashboard reaches it from inside the Docker overlay |
| `8200/tcp` | Vault | Docker overlay / private network |
| `2377/tcp` | Docker Swarm control plane | From the app subnet |
| `7946/tcp`, `7946/udp` | Docker Swarm node discovery | From the app subnet |
| `4789/udp` | Docker Swarm VXLAN overlay | From the app subnet |
| `6379/tcp` | Redis | From the app subnet |
| `5672/tcp` | RabbitMQ AMQP | From the app subnet |
| `61613/tcp` | RabbitMQ STOMP | From the app subnet |
| `15674/tcp` | RabbitMQ Web STOMP | From the app subnet |
Additional DB firewall rules (sourced from the db subnet `10.20.20.0/24`):
| Port | Service | Rule |
| --- | --- | --- |
| `5432/tcp` | PostgreSQL replication | From the DB subnet |
| `27017/tcp` | MongoDB replica set | From the DB subnet |
IP restriction is done in the SWAG nginx configuration, not in the Hetzner firewall.
## Lifecycle and Resize Policy
### Changing server_type (Resizing)
Changing `server_type` does **not** trigger a Terraform destroy+create. The `hcloud` provider
supports this natively: it stops the server, calls the Hetzner Resize API, and restarts it.
Update the value in `terraform.tfvars` and run `terraform apply`.
There is downtime (the server stops and starts), but the disk, installed software, and Docker volumes
are preserved. No `ignore_changes` or manual step is needed.
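The resize workflow, as a sketch:
```bash
# 1. edit terraform.tfvars, e.g. server_type_swarm = "cx52" (example value)
terraform plan    # should report an in-place update of server_type, not a replacement
terraform apply   # the provider stops, resizes, and restarts each affected server
```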
### Which Changes Force Recreation of a Server?
| Changed field | Behavior | Note |
| --- | --- | --- |
| `server_type` | In-place resize (provider native) | `terraform apply` is enough |
| `hcloud_server_network` | Only the attachment is updated | Because it is a separate resource |
| `hcloud_firewall_attachment` | Only the attachment is updated | Because it is a separate resource |
| `placement_group_id` | The Hetzner API does not allow changing it → destroy+create | Do not change |
| `image` | The disk image changes → destroy+create | Do not change |
| `location` | Cannot be moved to another datacenter → destroy+create | Do not change |
### Separate Network and Firewall Attachments
The `network` block and `firewall_ids` are not embedded in `hcloud_server`. Separate resources
are defined instead:
- `hcloud_server_network` — private IP assignment (`for_each` over each node)
- `hcloud_firewall_attachment` — firewall association (over the server list derived via `for_each`)
### prevent_destroy Protection
Every server gets `lifecycle { prevent_destroy = true }`. To delete one intentionally,
temporarily remove the lifecycle block first.
## Acceptance Criteria
- `terraform plan` runs only with the prod Hetzner Project token.
- 6 servers are created.
- Swarm nodes are in the `prod-swarm-spread` placement group.
- DB nodes are in the `prod-db-spread` placement group.
- 6 servers are created (`iklim-app-01/02/03`, `iklim-db-01/02/03`).
- Swarm nodes are in the `iklim-prod-app-spread` placement group.
- DB nodes are in the `iklim-prod-db-spread` placement group.
- The public firewall allows ingress only on `22`, `80`, `443`.
- The private firewall matches `07-private-network-port-matrisi.md`.
- DB replication ports are reachable only from the DB subnet.
- The floating IP is created and assigned to `iklim-app-01`.
- Terraform state and secret tfvars are never committed.


@ -6,12 +6,12 @@ The purpose of this phase is to bring the prod machines created by Terraform to Linux, security ha…
| Host | Role |
| --- | --- |
| `prod-swarm-01` | Swarm manager + app worker |
| `prod-swarm-02` | Swarm manager + app worker |
| `prod-swarm-03` | Swarm manager + app worker |
| `prod-db-01` | Manual DB cluster node |
| `prod-db-02` | Manual DB cluster node |
| `prod-db-03` | Manual DB cluster node |
| `iklim-app-01` | Swarm manager + app worker |
| `iklim-app-02` | Swarm manager + app worker |
| `iklim-app-03` | Swarm manager + app worker |
| `iklim-db-01` | Manual DB cluster node |
| `iklim-db-02` | Manual DB cluster node |
| `iklim-db-03` | Manual DB cluster node |
## Recommended File Structure
@ -76,7 +76,7 @@ The Hetzner Cloud Firewall is considered the main perimeter. UFW acts as a second defensive layer on the host…
## Docker Role
Required only on the `prod-swarm-*` nodes.
Required only on the `iklim-app-*` nodes.
Packages to install:
@ -94,27 +94,27 @@ Docker is not mandatory on the DB nodes. The manual DB installation strategy, container…
Prod Swarm will be set up with 3 managers (see the join sketch after this list):
1. `docker swarm init` on `prod-swarm-01`
1. `docker swarm init` on `iklim-app-01`
2. Advertise/data path addr: `10.20.10.11`
3. The manager join token is obtained.
4. `prod-swarm-02` and `prod-swarm-03` join as managers.
4. `iklim-app-02` and `iklim-app-03` join as managers.
5. The overlay network is created:
- `iklimco-net`
- driver: `overlay`
- attachable: `true`
6. All 3 nodes are labeled `type=service`:
```bash
for node in prod-swarm-01 prod-swarm-02 prod-swarm-03; do
for node in iklim-app-01 iklim-app-02 iklim-app-03; do
docker node update --label-add type=service "$node"
done
```
7. No node is drained. All 3 nodes stay `AVAILABILITY=Active`, working as both managers and app workers.
> DB nodes (`prod-db-*`) are not joined to the Swarm. The DB cluster is managed separately.
> DB nodes (`iklim-db-*`) are not joined to the Swarm. The DB cluster is managed separately.
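A sketch of the init/join flow from steps 1-4; the token placeholder is illustrative:
```bash
# on iklim-app-01
docker swarm init --advertise-addr 10.20.10.11
docker swarm join-token manager              # prints the manager join command

# on iklim-app-02 and iklim-app-03
docker swarm join --token <MANAGER_TOKEN> 10.20.10.11:2377
```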
## Node Directory Role
On all `prod-swarm-*` nodes:
On all `iklim-app-*` nodes:
```text
/opt/iklimco
@ -138,7 +138,7 @@ For manual DB installation on the DB nodes:
- The 3 Swarm nodes appear as managers in `docker node ls`; all are `AVAILABILITY=Active`.
- Manager quorum holds (3 managers; the loss of 1 is tolerated).
- The `iklimco-net` overlay network exists.
- The output of `docker node inspect prod-swarm-01 --format '{{.Spec.Labels}}'` contains `map[type:service]`.
- The output of `docker node inspect iklim-app-01 --format '{{.Spec.Labels}}'` contains `map[type:service]`.
- DB nodes do not appear in the `docker node ls` output.
- The public firewall allows ingress only on `22`, `80`, `443`.
- DB nodes open no public DB port.


@ -8,9 +8,9 @@ A single runner is functionally sufficient, but it is not HA. Since the prod target is HA…
| Host | Runner |
| --- | --- |
| `prod-swarm-01` | `act_runner` systemd |
| `prod-swarm-02` | `act_runner` systemd |
| `prod-swarm-03` | `act_runner` systemd |
| `iklim-app-01` | `act_runner` systemd |
| `iklim-app-02` | `act_runner` systemd |
| `iklim-app-03` | `act_runner` systemd |
In this model, if any manager/runner is lost, the remaining runners can pick up pipeline jobs.
@ -42,9 +42,9 @@ ubuntu-24.04
Node-specific labels:
```text
prod-swarm-01
prod-swarm-02
prod-swarm-03
iklim-app-01
iklim-app-02
iklim-app-03
```
If the existing prod workflows use `runs-on: prod-runner`, any of the 3 runners can pick up the job. To pin a job to a specific node, use a node-specific label; a sketch follows below.
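A hypothetical Gitea Actions workflow fragment illustrating both label options (the file name and steps are placeholders):
```yaml
# .gitea/workflows/deploy.yml — illustrative fragment
jobs:
  deploy:
    runs-on: prod-runner       # any of the 3 HA runners may pick this up
    steps:
      - run: echo "deploy"
  node-maintenance:
    runs-on: iklim-app-01      # pinned to one node via its node-specific label
    steps:
      - run: echo "runs only on iklim-app-01"
```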


@ -10,14 +10,14 @@ This matrix is the reference for the Terraform Hetzner firewall and Ansible UFW rules…
| Subnet | CIDR | Purpose |
| --- | --- | --- |
| App/Swarm | `10.10.10.0/24` | `test-swarm-01` |
| App/Swarm | `10.10.10.0/24` | `iklim-app-01` |
| DB | `10.10.20.0/24` | `test-db-01` |
### Prod
| Subnet | CIDR | Purpose |
| --- | --- | --- |
| App/Swarm | `10.20.10.0/24` | `prod-swarm-01/02/03` |
| App/Swarm | `10.20.10.0/24` | `iklim-app-01/02/03` |
| DB | `10.20.20.0/24` | `prod-db-01/02/03` |
## Public Ingress Standard
@ -57,7 +57,7 @@ Required ports between Docker Swarm nodes:
In test these ports are effectively needed only by the single Swarm node, but they can be defined within the app subnet now to make adding workers later easier.
In prod these ports must be open between all `prod-swarm-*` nodes within the `10.20.10.0/24` app/swarm subnet.
In prod these ports must be open between all `iklim-app-*` nodes within the `10.20.10.0/24` app/swarm subnet.
Source: Docker overlay network documentation, https://docs.docker.com/engine/network/drivers/overlay/


@ -0,0 +1,195 @@
# Swarm node firewall public HTTP/HTTPS + private infra services
resource "hcloud_firewall" "swarm" {
name = "${local.name_prefix}-firewall-app"
# SSH admin CIDRs only
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.admin_allowed_cidrs
}
# HTTP public
rule {
direction = "in"
protocol = "tcp"
port = "80"
source_ips = ["0.0.0.0/0", "::/0"]
}
# HTTPS public
rule {
direction = "in"
protocol = "tcp"
port = "443"
source_ips = ["0.0.0.0/0", "::/0"]
}
# Docker Swarm control plane
rule {
direction = "in"
protocol = "tcp"
port = "2377"
source_ips = [local.app_subnet_cidr]
}
# Docker Swarm node discovery (TCP)
rule {
direction = "in"
protocol = "tcp"
port = "7946"
source_ips = [local.app_subnet_cidr]
}
# Docker Swarm node discovery (UDP)
rule {
direction = "in"
protocol = "udp"
port = "7946"
source_ips = [local.app_subnet_cidr]
}
# Docker Swarm VXLAN overlay
rule {
direction = "in"
protocol = "udp"
port = "4789"
source_ips = [local.app_subnet_cidr]
}
# Vault API private only, never public
rule {
direction = "in"
protocol = "tcp"
port = "8200"
source_ips = [local.app_subnet_cidr]
}
# Redis
rule {
direction = "in"
protocol = "tcp"
port = "6379"
source_ips = [local.app_subnet_cidr]
}
# RabbitMQ AMQP
rule {
direction = "in"
protocol = "tcp"
port = "5672"
source_ips = [local.app_subnet_cidr]
}
# RabbitMQ STOMP
rule {
direction = "in"
protocol = "tcp"
port = "61613"
source_ips = [local.app_subnet_cidr]
}
# RabbitMQ Web STOMP
rule {
direction = "in"
protocol = "tcp"
port = "15674"
source_ips = [local.app_subnet_cidr]
}
# RabbitMQ Management private only, reached via SWAG on 443
rule {
direction = "in"
protocol = "tcp"
port = "15672"
source_ips = [local.app_subnet_cidr]
}
# APISIX Dashboard private only, reached via SWAG on 443 (IP-restricted)
rule {
direction = "in"
protocol = "tcp"
port = "9000"
source_ips = [local.app_subnet_cidr]
}
# APISIX Admin API: only the Dashboard reaches it, from inside the Docker overlay
rule {
direction = "in"
protocol = "tcp"
port = "9180"
source_ips = [local.app_subnet_cidr]
}
# Prometheus private only; Grafana reaches it over the Docker overlay
rule {
direction = "in"
protocol = "tcp"
port = "9090"
source_ips = [local.app_subnet_cidr]
}
# Grafana private only, reached via SWAG on 443 (IP-restricted)
rule {
direction = "in"
protocol = "tcp"
port = "3000"
source_ips = [local.app_subnet_cidr]
}
labels = {
environment = local.environment
role = "swarm"
}
}
# DB node firewall SSH + DB ports from app/swarm subnet only
resource "hcloud_firewall" "db" {
name = "${local.name_prefix}-firewall-db"
# SSH admin CIDRs only
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.admin_allowed_cidrs
}
# PostgreSQL from app/swarm subnet
rule {
direction = "in"
protocol = "tcp"
port = "5432"
source_ips = [local.app_subnet_cidr]
}
# PostgreSQL replication within DB subnet
rule {
direction = "in"
protocol = "tcp"
port = "5432"
source_ips = [local.db_subnet_cidr]
}
# MongoDB from app/swarm subnet
rule {
direction = "in"
protocol = "tcp"
port = "27017"
source_ips = [local.app_subnet_cidr]
}
# MongoDB replica set internal traffic
rule {
direction = "in"
protocol = "tcp"
port = "27017"
source_ips = [local.db_subnet_cidr]
}
labels = {
environment = local.environment
role = "db"
}
}


@ -0,0 +1,18 @@
resource "hcloud_floating_ip" "app" {
name = "${local.name_prefix}-app-fip"
type = "ipv4"
home_location = var.location
description = "Floating IP for ${local.environment} app entry point"
labels = {
environment = local.environment
role = "app"
}
}
# The floating IP is assigned to iklim-app-01 (the primary node).
# If failover is needed, it can be moved to another app node manually or via automation.
resource "hcloud_floating_ip_assignment" "app" {
floating_ip_id = hcloud_floating_ip.app.id
server_id = hcloud_server.swarm["iklim-app-01"].id
}


@ -0,0 +1,22 @@
locals {
environment = "prod"
hcloud_project = "iklim_prod"
name_prefix = "iklim-prod"
swarm_private_ips = {
"iklim-app-01" = "10.20.10.11"
"iklim-app-02" = "10.20.10.12"
"iklim-app-03" = "10.20.10.13"
}
db_private_ips = {
"iklim-db-01" = "10.20.20.11"
"iklim-db-02" = "10.20.20.12"
"iklim-db-03" = "10.20.20.13"
}
network_zone = "eu-central"
network_cidr = "10.20.0.0/16"
app_subnet_cidr = "10.20.10.0/24"
db_subnet_cidr = "10.20.20.0/24"
}


@ -0,0 +1,22 @@
resource "hcloud_network" "main" {
name = "${local.name_prefix}-net"
ip_range = local.network_cidr
labels = {
environment = local.environment
}
}
resource "hcloud_network_subnet" "app" {
network_id = hcloud_network.main.id
type = "cloud"
network_zone = local.network_zone
ip_range = local.app_subnet_cidr
}
resource "hcloud_network_subnet" "db" {
network_id = hcloud_network.main.id
type = "cloud"
network_zone = local.network_zone
ip_range = local.db_subnet_cidr
}


@ -0,0 +1,52 @@
output "ansible_inventory_yaml" {
description = "Ansible inventory in YAML format — write to ansible/inventory/generated/prod.yml"
sensitive = false
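  # Assumed usage: terraform output -raw ansible_inventory_yaml > ansible/inventory/generated/prod.yml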
value = yamlencode({
all = {
children = {
swarm = {
hosts = {
for name, server in hcloud_server.swarm : name => {
ansible_host = server.ipv4_address
private_ip = local.swarm_private_ips[name]
ansible_user = "root"
}
}
}
db = {
hosts = {
for name, server in hcloud_server.db : name => {
ansible_host = server.ipv4_address
private_ip = local.db_private_ips[name]
ansible_user = "root"
}
}
}
}
}
})
}
output "prod_private_ips" {
description = "Private IPs assigned to prod nodes"
sensitive = false
value = {
swarm = local.swarm_private_ips
db = local.db_private_ips
}
}
output "prod_public_ips" {
description = "Public IPv4 addresses of prod nodes"
sensitive = false
value = {
swarm = { for name, server in hcloud_server.swarm : name => server.ipv4_address }
db = { for name, server in hcloud_server.db : name => server.ipv4_address }
}
}
output "prod_floating_ip" {
description = "Floating IP for prod swarm entry point — point DNS A records here"
sensitive = false
value = hcloud_floating_ip.app.ip_address
}


@ -0,0 +1,19 @@
resource "hcloud_placement_group" "app_spread" {
name = "${local.name_prefix}-app-spread"
type = "spread"
labels = {
environment = local.environment
role = "app"
}
}
resource "hcloud_placement_group" "db_spread" {
name = "${local.name_prefix}-db-spread"
type = "spread"
labels = {
environment = local.environment
role = "db"
}
}


@ -0,0 +1,5 @@
# Hetzner Cloud Project: iklim_prod
# The token must belong to this project.
provider "hcloud" {
token = var.hcloud_token
}


@ -0,0 +1,76 @@
resource "hcloud_ssh_key" "admin" {
name = "${local.name_prefix}-admin-key"
public_key = file(var.admin_ssh_public_key_path)
}
resource "hcloud_server" "swarm" {
for_each = local.swarm_private_ips
name = each.key
server_type = var.server_type_swarm
image = var.image
location = var.location
ssh_keys = [hcloud_ssh_key.admin.id]
placement_group_id = hcloud_placement_group.app_spread.id
labels = {
environment = local.environment
role = "swarm"
type = "service"
}
lifecycle {
prevent_destroy = true
}
}
resource "hcloud_server" "db" {
for_each = local.db_private_ips
name = each.key
server_type = var.server_type_db
image = var.image
location = var.location
ssh_keys = [hcloud_ssh_key.admin.id]
placement_group_id = hcloud_placement_group.db_spread.id
labels = {
environment = local.environment
role = "db"
type = "db"
}
lifecycle {
prevent_destroy = true
}
}
resource "hcloud_server_network" "swarm" {
for_each = local.swarm_private_ips
server_id = hcloud_server.swarm[each.key].id
network_id = hcloud_network.main.id
ip = each.value
depends_on = [hcloud_network_subnet.app]
}
resource "hcloud_server_network" "db" {
for_each = local.db_private_ips
server_id = hcloud_server.db[each.key].id
network_id = hcloud_network.main.id
ip = each.value
depends_on = [hcloud_network_subnet.db]
}
resource "hcloud_firewall_attachment" "swarm" {
firewall_id = hcloud_firewall.swarm.id
server_ids = [for s in hcloud_server.swarm : s.id]
}
resource "hcloud_firewall_attachment" "db" {
firewall_id = hcloud_firewall.db.id
server_ids = [for s in hcloud_server.db : s.id]
}


@ -0,0 +1,8 @@
# Hetzner Cloud Project: iklim_prod
hcloud_token = "YOUR_HETZNER_PROD_PROJECT_API_TOKEN"
location = "fsn1"
image = "rocky-10"
server_type_swarm = "cx42"
server_type_db = "cx52"
admin_ssh_public_key_path = "~/.ssh/id_ed25519.pub"
admin_allowed_cidrs = ["1.2.3.4/32", "5.6.7.8/32"]


@ -0,0 +1,40 @@
variable "hcloud_token" {
type = string
sensitive = true
description = "Hetzner Cloud API token for the prod project"
}
variable "location" {
type = string
default = "fsn1"
description = "Hetzner Cloud datacenter location"
}
variable "image" {
type = string
default = "rocky-10"
description = "Server image"
}
variable "server_type_swarm" {
type = string
default = "cx42"
description = "Hetzner server type for Swarm nodes"
}
variable "server_type_db" {
type = string
default = "cx52"
description = "Hetzner server type for DB nodes"
}
variable "admin_ssh_public_key_path" {
type = string
default = "~/.ssh/id_ed25519.pub"
description = "Path to the admin SSH public key file"
}
variable "admin_allowed_cidrs" {
type = list(string)
description = "CIDR list for admin SSH access"
}


@ -0,0 +1,10 @@
terraform {
required_version = ">= 1.6"
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
version = "~> 1.49"
}
}
}


@ -1,6 +1,6 @@
# Swarm node firewall public HTTP/HTTPS + private infra services
resource "hcloud_firewall" "swarm" {
name = "${local.name_prefix}-firewall-swarm"
name = "${local.name_prefix}-firewall-app"
# SSH admin CIDRs only
rule {
@ -98,40 +98,48 @@ resource "hcloud_firewall" "swarm" {
source_ips = [local.app_subnet_cidr]
}
# RabbitMQ Management admin CIDRs only
# RabbitMQ Management private only, reached via SWAG on 443
rule {
direction = "in"
protocol = "tcp"
port = "15672"
source_ips = var.admin_allowed_cidrs
source_ips = [local.app_subnet_cidr]
}
# APISIX Admin API admin CIDRs only
# APISIX Dashboard private only, reached via SWAG on 443 (IP-restricted)
rule {
direction = "in"
protocol = "tcp"
port = "9000"
source_ips = [local.app_subnet_cidr]
}
# APISIX Admin API: only the Dashboard reaches it from inside the Docker overlay; no human access needed
rule {
direction = "in"
protocol = "tcp"
port = "9180"
source_ips = var.admin_allowed_cidrs
source_ips = [local.app_subnet_cidr]
}
# Prometheus admin CIDRs only
# Prometheus private only, reached via SWAG on 443
rule {
direction = "in"
protocol = "tcp"
port = "9090"
source_ips = var.admin_allowed_cidrs
source_ips = [local.app_subnet_cidr]
}
# Grafana admin CIDRs only
# Grafana private only, reached via SWAG on 443
rule {
direction = "in"
protocol = "tcp"
port = "3000"
source_ips = var.admin_allowed_cidrs
source_ips = [local.app_subnet_cidr]
}
labels = {
environment = var.environment
environment = local.environment
role = "swarm"
}
}
@ -165,7 +173,7 @@ resource "hcloud_firewall" "db" {
}
labels = {
environment = var.environment
environment = local.environment
role = "db"
}
}


@ -0,0 +1,16 @@
resource "hcloud_floating_ip" "app" {
name = "${local.name_prefix}-app-fip"
type = "ipv4"
home_location = var.location
description = "Floating IP for ${local.environment} app entry point"
labels = {
environment = local.environment
role = "app"
}
}
resource "hcloud_floating_ip_assignment" "app" {
floating_ip_id = hcloud_floating_ip.app.id
server_id = hcloud_server.swarm.id
}


@ -1,9 +1,12 @@
locals {
name_prefix = "iklim-${var.environment}"
environment = "test"
hcloud_project = "iklim_test"
name_prefix = "iklim-test"
swarm_private_ip = "10.10.10.11"
db_private_ip = "10.10.20.11"
network_zone = "eu-central"
network_cidr = "10.10.0.0/16"
app_subnet_cidr = "10.10.10.0/24"
db_subnet_cidr = "10.10.20.0/24"


@ -3,20 +3,20 @@ resource "hcloud_network" "main" {
ip_range = local.network_cidr
labels = {
environment = var.environment
environment = local.environment
}
}
resource "hcloud_network_subnet" "app" {
network_id = hcloud_network.main.id
type = "cloud"
network_zone = "eu-central"
network_zone = local.network_zone
ip_range = local.app_subnet_cidr
}
resource "hcloud_network_subnet" "db" {
network_id = hcloud_network.main.id
type = "cloud"
network_zone = "eu-central"
network_zone = local.network_zone
ip_range = local.db_subnet_cidr
}


@ -44,3 +44,9 @@ output "test_public_ips" {
db_01 = hcloud_server.db.ipv4_address
}
}
output "test_floating_ip" {
description = "Floating IP for test app entry point — point DNS A records here"
sensitive = false
value = hcloud_floating_ip.app.ip_address
}


@ -3,6 +3,6 @@ resource "hcloud_placement_group" "test_spread" {
type = "spread"
labels = {
environment = var.environment
environment = local.environment
}
}


@ -1,3 +1,5 @@
# Hetzner Cloud Project: iklim_test
# The token must belong to this project.
provider "hcloud" {
token = var.hcloud_token
}


@ -4,7 +4,7 @@ resource "hcloud_ssh_key" "admin" {
}
resource "hcloud_server" "swarm" {
name = "${var.environment}-swarm-01"
name = "iklim-app-01"
server_type = var.server_type_swarm
image = var.image
location = var.location
@ -12,7 +12,7 @@ resource "hcloud_server" "swarm" {
placement_group_id = hcloud_placement_group.test_spread.id
labels = {
environment = var.environment
environment = local.environment
role = "swarm"
type = "service"
}
@ -25,7 +25,7 @@ resource "hcloud_server" "swarm" {
}
resource "hcloud_server" "db" {
name = "${var.environment}-db-01"
name = "iklim-db-01"
server_type = var.server_type_db
image = var.image
location = var.location
@ -33,7 +33,7 @@ resource "hcloud_server" "db" {
placement_group_id = hcloud_placement_group.test_spread.id
labels = {
environment = var.environment
environment = local.environment
role = "db"
type = "db"
}


@ -1,8 +1,8 @@
# Hetzner Cloud Project: iklim_test
hcloud_token = "YOUR_HETZNER_TEST_PROJECT_API_TOKEN"
environment = "test"
location = "fsn1"
image = "ubuntu-24.04"
image = "rocky-10"
server_type_swarm = "cx32"
server_type_db = "cx42"
admin_ssh_public_key_path = "~/.ssh/id_ed25519.pub"
admin_allowed_cidrs = ["X.X.X.X/32"]
admin_allowed_cidrs = ["1.2.3.4/32", "5.6.7.8/32"]


@ -4,12 +4,6 @@ variable "hcloud_token" {
description = "Hetzner Cloud API token for the test project"
}
variable "environment" {
type = string
default = "test"
description = "Environment name prefix for all resources"
}
variable "location" {
type = string
default = "fsn1"
@ -18,7 +12,7 @@ variable "location" {
variable "image" {
type = string
default = "ubuntu-24.04"
default = "rocky-10"
description = "Server image"
}