Microservices Architecture and Kubernetes: A Complete Guide for the Enterprise
Introduction
Migrating from a monolithic application to microservices has become a pivotal technical transition for many companies. This article walks through the design, implementation, and operation of a microservices architecture built on Kubernetes from a practical standpoint.
Fundamental Principles of Microservices Architecture
What Are Microservices?
Microservices is an architectural pattern in which an application is built as a collection of small, independent services.
Key characteristics:
- Single responsibility: each service focuses on one specific business capability
- Independent deployment: each service can be deployed on its own
- Technology diversity: each service can choose the technology stack that fits it best
- Failure isolation: a failure in one service does not cascade across the whole system
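Failure isolation is usually enforced in code as well as in infrastructure. A common building block is a circuit breaker: after repeated failures, calls to an unhealthy dependency are rejected immediately instead of piling up. The sketch below is a minimal illustration, not tied to any particular library; the names `threshold` and `cooldownMs` are our own.

```typescript
// Minimal circuit-breaker state machine (illustrative sketch).
// After `threshold` consecutive failures the circuit opens and requests
// are rejected until `cooldownMs` has elapsed, after which one probe
// request is allowed through.
export class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly threshold: number,
    private readonly cooldownMs: number,
    private readonly now: () => number = Date.now,
  ) {}

  allowRequest(): boolean {
    if (this.failures < this.threshold) return true;
    // Open: fail fast until the cooldown has passed, then allow a probe.
    return this.now() - this.openedAt >= this.cooldownMs;
  }

  recordSuccess(): void {
    this.failures = 0; // any success closes the circuit
  }

  recordFailure(): void {
    this.failures++;
    if (this.failures >= this.threshold) this.openedAt = this.now();
  }
}
```

The caller wraps each outbound request: check `allowRequest()` first, then report the outcome with `recordSuccess()` or `recordFailure()`.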
Kubernetes Architecture
Core Concepts
```yaml
# Pod: the smallest deployable unit
apiVersion: v1
kind: Pod
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  containers:
    - name: user-service
      image: myregistry/user-service:1.0.0
      ports:
        - containerPort: 8080
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"
```
Defining a Service
```yaml
# Service: an abstraction over a network endpoint
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
```
Implementing the Microservices
1. API Gateway Pattern
```typescript
// api-gateway/src/index.ts
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';
import rateLimit from 'express-rate-limit';
import jwt from 'jsonwebtoken';

const app = express();

// Rate limiting
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100 // max requests per window
});
app.use(limiter);

// Authentication middleware
const authenticate = (req, res, next) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (!token) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET!);
    req.user = decoded;
    next();
  } catch (error) {
    return res.status(401).json({ error: 'Invalid token' });
  }
};

// Per-service proxy targets
const services = {
  '/api/users': {
    target: 'http://user-service:8080',
    changeOrigin: true
  },
  '/api/orders': {
    target: 'http://order-service:8080',
    changeOrigin: true
  },
  '/api/products': {
    target: 'http://product-service:8080',
    changeOrigin: true
  }
};

// Mount an authenticated proxy for each service
Object.keys(services).forEach(path => {
  app.use(
    path,
    authenticate,
    createProxyMiddleware(services[path])
  );
});

app.listen(3000, () => {
  console.log('API Gateway running on port 3000');
});
```
2. Inter-Service Communication
```typescript
// shared/src/messaging/event-bus.ts
import amqp from 'amqplib';

export class EventBus {
  private connection: amqp.Connection;
  private channel: amqp.Channel;

  async connect(url: string) {
    this.connection = await amqp.connect(url);
    this.channel = await this.connection.createChannel();
  }

  async publish(exchange: string, routingKey: string, data: any) {
    await this.channel.assertExchange(exchange, 'topic', { durable: true });
    const message = Buffer.from(JSON.stringify({
      timestamp: new Date().toISOString(),
      data
    }));
    this.channel.publish(exchange, routingKey, message, {
      persistent: true
    });
  }

  async subscribe(
    exchange: string,
    pattern: string,
    handler: (msg: any) => Promise<void>
  ) {
    await this.channel.assertExchange(exchange, 'topic', { durable: true });
    const q = await this.channel.assertQueue('', { exclusive: true });
    await this.channel.bindQueue(q.queue, exchange, pattern);
    this.channel.consume(q.queue, async (msg) => {
      if (msg) {
        try {
          const content = JSON.parse(msg.content.toString());
          await handler(content);
          this.channel.ack(msg);
        } catch (error) {
          console.error('Message processing failed:', error);
          this.channel.nack(msg, false, false);
        }
      }
    });
  }
}

// Usage example: Order Service
const eventBus = new EventBus();
await eventBus.connect(process.env.RABBITMQ_URL!);

// Publish an event when an order is created
await eventBus.publish('orders', 'order.created', {
  orderId: '12345',
  userId: 'user-123',
  items: [{ productId: 'prod-1', quantity: 2 }],
  totalAmount: 5000
});

// Subscribe to the event in the Inventory Service
await eventBus.subscribe('orders', 'order.created', async (msg) => {
  const { data } = msg;
  // Decrement stock for each ordered item
  for (const item of data.items) {
    await updateInventory(item.productId, -item.quantity);
  }
});
```
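The `pattern` argument to `subscribe` uses RabbitMQ topic-exchange semantics: routing keys are dot-separated words, `*` matches exactly one word, and `#` matches zero or more words. The broker evaluates these bindings server-side; the sketch below only reimplements the matching rules to make the semantics concrete.

```typescript
// Illustrative reimplementation of RabbitMQ topic-exchange matching.
// Pattern words: '*' = exactly one word, '#' = zero or more words.
export function topicMatches(pattern: string, key: string): boolean {
  const match = (p: string[], k: string[]): boolean => {
    if (p.length === 0) return k.length === 0;
    const [head, ...rest] = p;
    if (head === '#') {
      // '#' may consume zero or more words: try every possible split.
      for (let i = 0; i <= k.length; i++) {
        if (match(rest, k.slice(i))) return true;
      }
      return false;
    }
    if (k.length === 0) return false;
    if (head !== '*' && head !== k[0]) return false;
    return match(rest, k.slice(1));
  };
  return match(pattern.split('.'), key.split('.'));
}
```

So a binding like `order.*` receives `order.created` but not `order.created.eu`, while `order.#` receives both.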
3. Service Mesh (Istio)
```yaml
# istio-service-mesh.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user-service
spec:
  hosts:
    - user-service
  http:
    - match:
        - headers:
            version:
              exact: v2
      route:
        - destination:
            host: user-service
            subset: v2
          weight: 100
    - route:
        - destination:
            host: user-service
            subset: v1
          weight: 90
        - destination:
            host: user-service
            subset: v2
          weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user-service
spec:
  host: user-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        http2MaxRequests: 100
    loadBalancer:
      simple: LEAST_REQUEST
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 30s
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```
Deploying on Kubernetes
1. Deployment Strategies
```yaml
# blue-green-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
      version: blue
  template:
    metadata:
      labels:
        app: user-service
        version: blue
    spec:
      containers:
        - name: user-service
          image: myregistry/user-service:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: user-service-secrets
                  key: database-url
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
---
# Canary Deployment
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: user-service
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  service:
    port: 80
    targetPort: 8080
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500
        interval: 1m
```
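The `livenessProbe` and `readinessProbe` above assume the service actually answers on `/health` and `/ready`. The sketch below shows one way to do that with Node's built-in `http` module (an Express route pair works just as well); `markReady` is a hypothetical hook you would call once dependencies such as the database are connected.

```typescript
import http from 'http';

// Pure routing logic, kept separate so probe behaviour is easy to test.
// Liveness only says "the process is up"; readiness additionally requires
// that downstream dependencies are available.
export function probeStatus(path: string, ready: boolean): number {
  if (path === '/health') return 200;               // liveness
  if (path === '/ready') return ready ? 200 : 503;  // readiness
  return 404;
}

let dependenciesReady = false;

// Hypothetical hook: call once e.g. the DB connection is established.
export function markReady(): void {
  dependenciesReady = true;
}

export const server = http.createServer((req, res) => {
  res.writeHead(probeStatus(req.url ?? '', dependenciesReady));
  res.end();
});

// server.listen(8080);
```

Returning 503 from `/ready` takes the Pod out of Service endpoints without restarting it, whereas a failing `/health` causes the kubelet to restart the container.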
2. ConfigMaps and Secrets
```yaml
# config-management.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-service-config
data:
  application.yaml: |
    server:
      port: 8080
    logging:
      level: info
    cache:
      ttl: 3600
---
apiVersion: v1
kind: Secret
metadata:
  name: user-service-secrets
type: Opaque
stringData:
  database-url: "postgresql://user:pass@postgres:5432/userdb"
  jwt-secret: "your-super-secret-key"
  api-key: "external-service-api-key"
```
3. Horizontal Pod Autoscaler
```yaml
# autoscaling.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "1000"
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
        - type: Pods
          value: 4
          periodSeconds: 15
      selectPolicy: Max
```
Monitoring and Logging
1. Prometheus Metrics
```typescript
// metrics/prometheus.ts
import { Registry, Counter, Histogram, Gauge } from 'prom-client';

const register = new Registry();

// Custom metrics
export const httpRequestDuration = new Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status_code'],
  buckets: [0.1, 0.5, 1, 2, 5],
  registers: [register]
});

export const httpRequestsTotal = new Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'route', 'status_code'],
  registers: [register]
});

export const activeConnections = new Gauge({
  name: 'active_connections',
  help: 'Number of active connections',
  registers: [register]
});

// Express middleware
export const metricsMiddleware = (req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = (Date.now() - start) / 1000;
    const labels = {
      method: req.method,
      route: req.route?.path || req.path,
      status_code: res.statusCode
    };
    httpRequestDuration.observe(labels, duration);
    httpRequestsTotal.inc(labels);
  });
  next();
};
```
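What Prometheus actually scrapes from these metrics is a plain-text "exposition format"; in practice you expose `register.metrics()` on a `/metrics` route rather than building the text yourself. The hand-rolled renderer below exists only to show the wire format that the counter above turns into.

```typescript
// Renders one counter in the Prometheus text exposition format:
//   # HELP <name> <help>
//   # TYPE <name> counter
//   <name>{label="value",...} <sample>
export function renderCounter(
  name: string,
  help: string,
  samples: Array<{ labels: Record<string, string>; value: number }>
): string {
  const lines = [`# HELP ${name} ${help}`, `# TYPE ${name} counter`];
  for (const s of samples) {
    const labelStr = Object.entries(s.labels)
      .map(([k, v]) => `${k}="${v}"`)
      .join(',');
    lines.push(`${name}{${labelStr}} ${s.value}`);
  }
  return lines.join('\n') + '\n';
}
```

This per-label-set sample line is also what the custom-metrics pipeline behind the HPA's `http_requests_per_second` metric ultimately consumes.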
2. Distributed Tracing
```typescript
// tracing/opentelemetry.ts
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { JaegerExporter } from '@opentelemetry/exporter-jaeger';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';

const provider = new NodeTracerProvider({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: 'user-service',
    [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
  }),
});

const jaegerExporter = new JaegerExporter({
  endpoint: 'http://jaeger-collector:14268/api/traces',
});

provider.addSpanProcessor(new BatchSpanProcessor(jaegerExporter));
provider.register();

// HTTP client with tracing
import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('user-service');

export async function tracedHttpRequest(url: string, options: any) {
  const span = tracer.startSpan('http-request', {
    attributes: {
      'http.method': options.method || 'GET',
      'http.url': url,
    },
  });
  try {
    const response = await fetch(url, options);
    span.setAttributes({
      'http.status_code': response.status,
    });
    return response;
  } catch (error) {
    span.recordException(error);
    throw error;
  } finally {
    span.end();
  }
}
```
Security Best Practices
1. Network Policy
```yaml
# network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: user-service-network-policy
spec:
  podSelector:
    matchLabels:
      app: user-service
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
        - podSelector:
            matchLabels:
              app: order-service
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - podSelector:
            matchLabels:
              app: redis
      ports:
        - protocol: TCP
          port: 6379
```
2. RBAC Configuration
```yaml
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user-service-sa
  namespace: microservices
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-service-role
  namespace: microservices
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-service-rolebinding
  namespace: microservices
subjects:
  - kind: ServiceAccount
    name: user-service-sa
    namespace: microservices
roleRef:
  kind: Role
  name: user-service-role
  apiGroup: rbac.authorization.k8s.io
```
CI/CD Pipeline
GitOps with ArgoCD
```yaml
# argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: microservices-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/company/microservices-config
    targetRevision: HEAD
    path: environments/production
  destination:
    server: https://kubernetes.default.svc
    namespace: microservices
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
    syncOptions:
      - Validate=false
      - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
```
GitHub Actions Workflow
```yaml
# .github/workflows/microservice-deploy.yml
name: Microservice CI/CD

on:
  push:
    branches: [main]
    paths:
      - 'services/user-service/**'

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ secrets.REGISTRY_URL }}
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - name: Build and Test
        run: |
          cd services/user-service
          docker build -t test-image --target test .
          docker run --rm test-image npm test

      - name: Build and Push
        uses: docker/build-push-action@v4
        with:
          context: services/user-service
          push: true
          tags: |
            ${{ secrets.REGISTRY_URL }}/user-service:${{ github.sha }}
            ${{ secrets.REGISTRY_URL }}/user-service:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Update Kubernetes Manifests
        run: |
          cd k8s-config/environments/production
          sed -i "s|image:.*|image: ${{ secrets.REGISTRY_URL }}/user-service:${{ github.sha }}|" user-service/deployment.yaml

      - name: Commit and Push
        run: |
          git config --global user.name 'GitHub Actions'
          git config --global user.email 'actions@github.com'
          git add .
          git commit -m "Update user-service to ${{ github.sha }}"
          git push
```
Troubleshooting
Common Problems and Solutions

- Pods fail to start

```shell
kubectl describe pod <pod-name>
kubectl logs <pod-name> --previous
```

- Inter-service communication problems

```shell
kubectl exec -it <pod-name> -- nslookup <service-name>
kubectl get endpoints <service-name>
```

- Performance problems

```shell
kubectl top nodes
kubectl top pods --all-namespaces
```
Conclusion
The combination of a microservices architecture and Kubernetes provides a powerful foundation for building scalable, flexible systems. With sound design and operations, you can achieve high availability and development velocity at the same time.
Key takeaways:
- Split services along well-defined boundaries
- Build a robust CI/CD pipeline
- Implement comprehensive monitoring and logging
- Apply defense in depth for security
エンハンスド株式会社 offers support for adopting microservices architectures. Feel free to get in touch.
Tags: #Microservices #Kubernetes #CloudNative #DevOps #Containers
Author: エンハンスド株式会社, Cloud Architecture Division
Published: December 20, 2024