[Introduction to .NET Aspire] Part 5: Deployment and Scaling
Tags: .NET Aspire, Kubernetes, Docker, Azure, scaling
Introduction
Deploying a .NET Aspire application that runs in your development environment to production is a key step in cloud-native development. In the previous part we covered observability and monitoring. This time we look at deploying to Docker, Kubernetes, and Azure Container Apps, and at implementing autoscaling that responds to load.
.NET Aspire provides a seamless experience from local development through to production deployment, abstracting away much of the infrastructure complexity.
Deployment Architecture
Generating the Manifest
# Generate the .NET Aspire manifest
dotnet run --project AppHost/AppHost.csproj \
--publisher manifest \
--output-path ./manifest.json
Example of the generated manifest:
{
"resources": {
"cache": {
"type": "container.v0",
"image": "redis:7-alpine",
"volumes": [
{
"name": "cache-data",
"target": "/data",
"readOnly": false
}
]
},
"sql": {
"type": "container.v0",
"image": "mcr.microsoft.com/mssql/server:2022-latest",
"env": {
"ACCEPT_EULA": "Y",
"SA_PASSWORD": "{sql.password}"
},
"bindings": {
"tcp": {
"scheme": "tcp",
"protocol": "tcp",
"transport": "tcp",
"containerPort": 1433
}
}
},
"api": {
"type": "project.v0",
"path": "../Api/Api.csproj",
"env": {
"ConnectionStrings__cache": "{cache.connectionString}",
"ConnectionStrings__sql": "{sql.connectionString}",
"OTEL_EXPORTER_OTLP_ENDPOINT": "http://otel-collector:4317"
},
"bindings": {
"http": {
"scheme": "http",
"protocol": "tcp",
"transport": "http",
"containerPort": 8080
}
}
}
}
}
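Entries such as `{cache.connectionString}` and `{sql.password}` are placeholder expressions: the manifest records *references* between resources, and the publisher resolves them to concrete values at deploy time. As a rough, stand-in illustration of that substitution step (the `redis:6379` value here is an assumption for the sketch, not something the manifest contains):

```shell
#!/bin/sh
# Illustrative stand-in for placeholder resolution: a publisher replaces
# expressions like {cache.connectionString} with concrete values at deploy time.
template='ConnectionStrings__cache={cache.connectionString}'
cache_connection='redis:6379'   # assumed value for this sketch
resolved=$(printf '%s' "$template" | sed "s|{cache.connectionString}|$cache_connection|")
echo "$resolved"   # ConnectionStrings__cache=redis:6379
```

In practice you never perform this substitution yourself; tools such as azd or Aspirate consume the manifest and emit fully resolved environment variables for each target platform.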
Deploying to Docker Containers
Optimizing the Dockerfile
# Api/Dockerfile
# Build stage
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
# Optimize dependency-restore caching
COPY ["Api/Api.csproj", "Api/"]
COPY ["ServiceDefaults/ServiceDefaults.csproj", "ServiceDefaults/"]
RUN dotnet restore "Api/Api.csproj"
# Copy the source and build
COPY . .
WORKDIR "/src/Api"
RUN dotnet build "Api.csproj" -c Release -o /app/build
# Publish stage
FROM build AS publish
# Note: PublishTrimmed can break reflection-heavy code; test the trimmed output carefully.
RUN dotnet publish "Api.csproj" -c Release -o /app/publish \
/p:UseAppHost=false \
/p:PublishSingleFile=false \
/p:PublishTrimmed=true \
/p:PublishReadyToRun=true
# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS final
WORKDIR /app
# Run as a non-root user
RUN adduser -D -u 1000 appuser
USER appuser
# Add a health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8080/health || exit 1
COPY --from=publish /app/publish .
EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080
ENTRYPOINT ["dotnet", "Api.dll"]
Converting to Docker Compose
# docker-compose.yml
version: '3.8'
services:
redis:
image: redis:7-alpine
container_name: cache
volumes:
- redis-data:/data
networks:
- aspire-network
restart: unless-stopped
sql:
image: mcr.microsoft.com/mssql/server:2022-latest
container_name: sql
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=${SQL_PASSWORD:-YourStrong@Password}
volumes:
- sql-data:/var/opt/mssql
networks:
- aspire-network
restart: unless-stopped
healthcheck:
test: /opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P "$${SA_PASSWORD}" -C -Q "SELECT 1" || exit 1
interval: 10s
timeout: 3s
retries: 10
start_period: 10s
api:
build:
context: .
dockerfile: Api/Dockerfile
container_name: api
environment:
- ConnectionStrings__cache=redis:6379
- ConnectionStrings__sql=Server=sql;Database=ApiDb;User Id=sa;Password=${SQL_PASSWORD:-YourStrong@Password};TrustServerCertificate=true
- OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
ports:
- "5000:8080"
networks:
- aspire-network
depends_on:
sql:
condition: service_healthy
redis:
condition: service_started
restart: unless-stopped
# Monitoring stack
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
networks:
- aspire-network
restart: unless-stopped
prometheus:
image: prom/prometheus:latest
container_name: prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--web.enable-lifecycle'
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus-data:/prometheus
ports:
- "9090:9090"
networks:
- aspire-network
restart: unless-stopped
grafana:
image: grafana/grafana:latest
container_name: grafana
environment:
- GF_SECURITY_ADMIN_USER=${GRAFANA_USER:-admin}
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD:-admin}
volumes:
- grafana-data:/var/lib/grafana
- ./grafana-provisioning:/etc/grafana/provisioning
ports:
- "3000:3000"
networks:
- aspire-network
depends_on:
- prometheus
restart: unless-stopped
volumes:
redis-data:
sql-data:
prometheus-data:
grafana-data:
networks:
aspire-network:
driver: bridge
Deploying to Kubernetes
Aspire to Kubernetes
# Install the Aspirate tool
dotnet tool install -g aspirate
# Generate Kubernetes manifests
aspirate generate --project AppHost/AppHost.csproj --output k8s/
The Generated Kubernetes Manifests
# k8s/api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: api
labels:
app: api
app.kubernetes.io/name: api
app.kubernetes.io/part-of: myapp
spec:
replicas: 3
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
image: myregistry.azurecr.io/api:latest
ports:
- containerPort: 8080
protocol: TCP
env:
- name: ConnectionStrings__cache
valueFrom:
secretKeyRef:
name: api-secrets
key: cache-connection
- name: ConnectionStrings__sql
valueFrom:
secretKeyRef:
name: api-secrets
key: sql-connection
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: "http://otel-collector:4317"
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health/live
port: 8080
initialDelaySeconds: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: /health/ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: api
labels:
app: api
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: api
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: api-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: api
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 10
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
- type: Pods
value: 4
periodSeconds: 15
selectPolicy: Max
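The HPA controller sizes the Deployment using desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A quick sketch of that arithmetic against the 70% CPU target above (the observed utilization is a hypothetical input):

```shell
#!/bin/sh
# HPA sizing: desired = ceil(currentReplicas * currentUtilization / target)
current_replicas=3
current_cpu=140   # observed average utilization (%), hypothetical
target_cpu=70     # averageUtilization from the HPA spec above
# integer ceiling division
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "desired replicas: $desired"   # ceil(3 * 140 / 70) = 6
```

The `behavior` block then constrains how fast that target is approached: scale-up may double the pod count (or add 4 pods) every 15 seconds, while scale-down waits through a 300-second stabilization window and removes at most 10% per minute.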
Creating a Helm Chart
# helm/myapp/Chart.yaml
apiVersion: v2
name: myapp
description: A Helm chart for .NET Aspire application
type: application
version: 0.1.0
appVersion: "1.0"
# helm/myapp/values.yaml
replicaCount: 3
image:
repository: myregistry.azurecr.io/api
pullPolicy: IfNotPresent
tag: "latest"
service:
type: ClusterIP
port: 80
ingress:
enabled: true
className: nginx
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rate-limit: "100"
hosts:
- host: api.example.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: api-tls
hosts:
- api.example.com
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 250m
memory: 256Mi
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 70
targetMemoryUtilizationPercentage: 80
redis:
enabled: true
auth:
enabled: true
password: "changeMe"
master:
persistence:
enabled: true
size: 8Gi
postgresql:
enabled: true
auth:
postgresPassword: "changeMe"
database: "apidb"
primary:
persistence:
enabled: true
size: 10Gi
Deploying to Azure Container Apps
Using the Azure Developer CLI (azd)
# Initialize azd
azd init
# Create an environment
azd env new production
# Provision infrastructure and deploy
azd up
Configuring azure.yaml
# azure.yaml
name: myapp
metadata:
template: aspire@v0.1
services:
app:
project: ./AppHost
language: dotnet
host: containerapp
hooks:
preprovision:
shell: pwsh
run: ./scripts/preprovision.ps1
postprovision:
shell: pwsh
run: ./scripts/postprovision.ps1
Bicep Templates
// infra/main.bicep
targetScope = 'subscription'
@minLength(1)
@maxLength(64)
@description('Name of the environment')
param environmentName string
@minLength(1)
@description('Primary location for all resources')
param location string
@secure()
@description('SQL administrator password')
param sqlPassword string
var tags = {
'azd-env-name': environmentName
}
resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
name: 'rg-${environmentName}'
location: location
tags: tags
}
module containerApps './core/host/container-apps.bicep' = {
name: 'container-apps'
scope: rg
params: {
name: 'app'
location: location
tags: tags
}
}
// Container Apps Environment
module containerAppsEnvironment './core/host/container-apps-environment.bicep' = {
name: 'container-apps-environment'
scope: rg
params: {
name: 'cae-${environmentName}'
location: location
tags: tags
logAnalyticsWorkspaceId: monitoring.outputs.logAnalyticsWorkspaceId
}
}
// Redis Cache
module redis './core/database/redis.bicep' = {
name: 'redis'
scope: rg
params: {
name: 'redis-${environmentName}'
location: location
tags: tags
}
}
// SQL Database
module sql './core/database/sql-server.bicep' = {
name: 'sql'
scope: rg
params: {
name: 'sql-${environmentName}'
location: location
tags: tags
administratorLogin: 'sqladmin'
administratorLoginPassword: sqlPassword
}
}
// Monitoring
module monitoring './core/monitor/monitoring.bicep' = {
name: 'monitoring'
scope: rg
params: {
name: 'log-${environmentName}'
location: location
tags: tags
}
}
Implementing Autoscaling
KEDA (Kubernetes Event-driven Autoscaling)
# k8s/keda-scaler.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: api-scaler
spec:
scaleTargetRef:
name: api
minReplicaCount: 2
maxReplicaCount: 20
pollingInterval: 30
cooldownPeriod: 300
triggers:
# Scale on HTTP request rate
- type: prometheus
metadata:
serverAddress: http://prometheus:9090
metricName: http_requests_per_second
query: sum(rate(http_server_request_duration_seconds_count[1m]))
threshold: '100'
# Scale on RabbitMQ queue length
- type: rabbitmq
metadata:
host: amqp://rabbitmq:5672
queueName: orders
queueLength: '10'
# Scale on a custom metric
- type: prometheus
metadata:
serverAddress: http://prometheus:9090
metricName: pending_orders
query: orders_pending
threshold: '50'
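Each KEDA trigger is evaluated independently: for a given trigger, the desired replica count is effectively ceil(metricValue / threshold), and when several triggers fire at once the highest result wins. A sketch of that decision with hypothetical metric readings for the request-rate and queue-length triggers above:

```shell
#!/bin/sh
# KEDA: per-trigger desired = ceil(metric / threshold); the max across triggers wins.
rps=250;  rps_threshold=100     # hypothetical request rate vs. the '100' threshold
queue=35; queue_threshold=10    # hypothetical queue length vs. the '10' threshold
d_rps=$(( (rps + rps_threshold - 1) / rps_threshold ))         # ceil(250/100) = 3
d_queue=$(( (queue + queue_threshold - 1) / queue_threshold )) # ceil(35/10)  = 4
desired=$d_rps
if [ "$d_queue" -gt "$desired" ]; then desired=$d_queue; fi
echo "desired replicas: $desired"   # the queue trigger dominates
```

`pollingInterval` controls how often KEDA samples these metrics, and `cooldownPeriod` how long it waits after the last trigger fires before scaling back down.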
Application-Level Scaling Strategies
// Services/ScalingMetricsService.cs
public class ScalingMetricsService : BackgroundService
{
private readonly IServiceProvider _serviceProvider;
private readonly ILogger<ScalingMetricsService> _logger;
private readonly Gauge<double> _cpuUsage;
private readonly Gauge<double> _memoryUsage;
private readonly Gauge<int> _activeConnections;
private readonly Gauge<int> _queueDepth;
public ScalingMetricsService(
IServiceProvider serviceProvider,
IMeterFactory meterFactory,
ILogger<ScalingMetricsService> logger)
{
_serviceProvider = serviceProvider;
_logger = logger;
var meter = meterFactory.Create("Scaling");
// Note: the writable Gauge<T> (CreateGauge/Record) requires .NET 9 or the
// System.Diagnostics.DiagnosticSource 9.0 package; on .NET 8, use an ObservableGauge.
_cpuUsage = meter.CreateGauge<double>(
"app_cpu_usage",
unit: "%",
description: "Current CPU usage percentage");
_memoryUsage = meter.CreateGauge<double>(
"app_memory_usage",
unit: "MB",
description: "Current memory usage in MB");
_activeConnections = meter.CreateGauge<int>(
"app_active_connections",
description: "Number of active connections");
_queueDepth = meter.CreateGauge<int>(
"app_queue_depth",
description: "Number of messages in queue");
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
try
{
// Read CPU usage
var process = Process.GetCurrentProcess();
var cpuUsage = await GetCpuUsageAsync();
_cpuUsage.Record(cpuUsage);
// Read memory usage
var memoryUsage = process.WorkingSet64 / (1024.0 * 1024.0);
_memoryUsage.Record(memoryUsage);
// Read the active connection count
using var scope = _serviceProvider.CreateScope();
var connectionManager = scope.ServiceProvider
.GetRequiredService<IConnectionManager>();
_activeConnections.Record(connectionManager.GetActiveConnectionCount());
// Read the queue depth
var queueMonitor = scope.ServiceProvider
.GetRequiredService<IQueueMonitor>();
_queueDepth.Record(await queueMonitor.GetQueueDepthAsync());
// Log scaling hints
if (cpuUsage > 80)
{
_logger.LogWarning("High CPU usage detected: {CpuUsage}%", cpuUsage);
}
if (memoryUsage > 1024) // 1GB
{
_logger.LogWarning("High memory usage detected: {MemoryUsage}MB", memoryUsage);
}
await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
}
catch (Exception ex)
{
_logger.LogError(ex, "Error collecting scaling metrics");
await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
}
}
}
private async Task<double> GetCpuUsageAsync()
{
var startTime = DateTime.UtcNow;
var startCpuUsage = Process.GetCurrentProcess().TotalProcessorTime;
await Task.Delay(1000);
var endTime = DateTime.UtcNow;
var endCpuUsage = Process.GetCurrentProcess().TotalProcessorTime;
var cpuUsedMs = (endCpuUsage - startCpuUsage).TotalMilliseconds;
var totalMsPassed = (endTime - startTime).TotalMilliseconds;
var cpuUsageTotal = cpuUsedMs / (Environment.ProcessorCount * totalMsPassed);
return cpuUsageTotal * 100;
}
}
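GetCpuUsageAsync above normalizes consumed CPU time by wall-clock time and core count: usage = cpuUsedMs / (ProcessorCount × elapsedMs) × 100. A quick check of that arithmetic with hypothetical sample values:

```shell
#!/bin/sh
# cpu% = cpuUsedMs / (cores * elapsedMs) * 100, as in GetCpuUsageAsync above
cpu_used_ms=500   # hypothetical CPU time consumed during the sample window
elapsed_ms=1000   # the 1-second sampling delay used in the C# code
cores=2
usage=$(( cpu_used_ms * 100 / (cores * elapsed_ms) ))
echo "cpu usage: ${usage}%"   # 500 * 100 / 2000 = 25
```

Dividing by the core count is what keeps the result in the 0-100 range on multi-core hosts; a single busy thread on a 2-core machine reports 50%, not 100%.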
Blue-Green Deployment
Implementation on Kubernetes
# k8s/blue-green-deployment.yaml
apiVersion: v1
kind: Service
metadata:
name: api-active
spec:
selector:
app: api
version: blue # or green
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-blue
spec:
replicas: 3
selector:
matchLabels:
app: api
version: blue
template:
metadata:
labels:
app: api
version: blue
spec:
containers:
- name: api
image: myregistry.azurecr.io/api:v1.0
ports:
- containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-green
spec:
replicas: 3
selector:
matchLabels:
app: api
version: green
template:
metadata:
labels:
app: api
version: green
spec:
containers:
- name: api
image: myregistry.azurecr.io/api:v1.1
ports:
- containerPort: 8080
The Switchover Script
#!/bin/bash
# switch-deployment.sh
CURRENT_VERSION=$(kubectl get service api-active -o jsonpath='{.spec.selector.version}')
echo "Current version: $CURRENT_VERSION"
if [ "$CURRENT_VERSION" == "blue" ]; then
NEW_VERSION="green"
else
NEW_VERSION="blue"
fi
echo "Switching to version: $NEW_VERSION"
# Health check
kubectl wait --for=condition=ready pod -l app=api,version=$NEW_VERSION --timeout=300s
if [ $? -eq 0 ]; then
# Switch traffic over
kubectl patch service api-active -p '{"spec":{"selector":{"version":"'$NEW_VERSION'"}}}'
echo "Successfully switched to $NEW_VERSION"
# Scale down the old version (optional)
# kubectl scale deployment api-$CURRENT_VERSION --replicas=0
else
echo "Health check failed for $NEW_VERSION"
exit 1
fi
Disaster Recovery and Backups
Persisting State
// Services/StateBackupService.cs
public class StateBackupService : BackgroundService
{
private readonly IServiceProvider _serviceProvider;
private readonly ILogger<StateBackupService> _logger;
private readonly IConfiguration _configuration;
public StateBackupService(
IServiceProvider serviceProvider,
ILogger<StateBackupService> logger,
IConfiguration configuration)
{
_serviceProvider = serviceProvider;
_logger = logger;
_configuration = configuration;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
var backupInterval = TimeSpan.FromHours(1);
while (!stoppingToken.IsCancellationRequested)
{
try
{
await PerformBackupAsync();
await Task.Delay(backupInterval, stoppingToken);
}
catch (Exception ex)
{
_logger.LogError(ex, "Backup failed");
await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
}
}
}
private async Task PerformBackupAsync()
{
using var scope = _serviceProvider.CreateScope();
var dbContext = scope.ServiceProvider.GetRequiredService<AppDbContext>();
var blobService = scope.ServiceProvider.GetRequiredService<IBlobStorageService>();
var backupId = $"backup_{DateTime.UtcNow:yyyyMMdd_HHmmss}";
// Export the database
var data = await ExportDatabaseAsync(dbContext);
// Upload to Blob Storage
await blobService.UploadAsync(
containerName: "backups",
blobName: $"{backupId}/database.json",
content: data);
// Save backup metadata
var metadata = new BackupMetadata
{
Id = backupId,
Timestamp = DateTime.UtcNow,
Version = Assembly.GetExecutingAssembly().GetName().Version?.ToString(),
Size = Encoding.UTF8.GetByteCount(data)
};
await blobService.UploadAsync(
containerName: "backups",
blobName: $"{backupId}/metadata.json",
content: JsonSerializer.Serialize(metadata));
_logger.LogInformation("Backup completed: {BackupId}", backupId);
// Clean up old backups
await CleanupOldBackupsAsync(blobService);
}
private async Task<string> ExportDatabaseAsync(AppDbContext dbContext)
{
var export = new
{
ExportDate = DateTime.UtcNow,
Orders = await dbContext.Orders.ToListAsync(),
Customers = await dbContext.Customers.ToListAsync(),
Products = await dbContext.Products.ToListAsync()
};
return JsonSerializer.Serialize(export, new JsonSerializerOptions
{
WriteIndented = true
});
}
private async Task CleanupOldBackupsAsync(IBlobStorageService blobService)
{
var retentionDays = _configuration.GetValue<int>("Backup:RetentionDays", 7);
var cutoffDate = DateTime.UtcNow.AddDays(-retentionDays);
var blobs = await blobService.ListBlobsAsync("backups");
var oldBackups = blobs
.Where(b => b.Properties.CreatedOn < cutoffDate)
.ToList();
foreach (var blob in oldBackups)
{
await blobService.DeleteAsync("backups", blob.Name);
_logger.LogInformation("Deleted old backup: {BackupName}", blob.Name);
}
}
}
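One design note on the `backup_yyyyMMdd_HHmmss` naming used above: because the timestamp is zero-padded and ordered most-significant-first, plain lexicographic comparison matches chronological order, so a cutoff can be applied to the blob *names* even without the CreatedOn metadata the C# code uses. A small sketch (the backup names are made up):

```shell
#!/bin/sh
# backup_yyyyMMdd_HHmmss sorts lexicographically in chronological order,
# so a string comparison against a cutoff name selects old backups.
cutoff="backup_20240101_000000"
old_count=0
for b in backup_20231230_120000 backup_20240105_090000 backup_20231201_000000; do
  if expr "$b" \< "$cutoff" >/dev/null; then
    echo "would delete: $b"
    old_count=$((old_count + 1))
  fi
done
echo "old backups: $old_count"   # 2 of the 3 names predate the cutoff
```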
Summary
In this part we covered deployment and scaling for .NET Aspire applications. Key points:
- Flexible deployment: support for Docker, Kubernetes, and Azure Container Apps
- Autoscaling: dynamic resource adjustment in response to load
- Blue-green deployment: updates without downtime
- KEDA: advanced event-driven scaling
- Disaster recovery: automated backup and restore
In the next part, we will cover best practices and caveats for production environments.
Coming up in Part 6, "Best Practices and Production Considerations": security, performance optimization, cost management, and other key points for running .NET Aspire applications in production.