OpenText Experience Foundation CE 24.4 - Deployment Guide
Step 5. Configuring PostgreSQL
Preparing the databases
# /home/damadmin/opentext-experience-cloud/scripts/manual-scripts/database
# Edit postgres-create-otds-database.sh:
# change NOSUPERUSER to SUPERUSER
# Cleanup (only needed when re-installing): run the following drops inside psql
sudo -u postgres psql
DROP DATABASE expcloud;
DROP DATABASE expai;
DROP DATABASE otds;
DROP DATABASE rma;
DROP USER expclouduser;
DROP USER expaiuser;
DROP USER otdsuser;
DROP USER rmauser;
\q
# Usage: ./expcloud-setup-databases.sh -dbHost mypostgres.company.com -dbAdminPwd <password> ...
cd /home/damadmin/opentext-experience-cloud/scripts/manual-scripts/database
./expcloud-setup-databases.sh -dbHost dam-a -dbAdminPwd Admin@2024 \
-otdsUserPwd Admin@2024 -expcloudUserPwd Admin@2024 -rmaUserPwd Admin@2024 \
-expaiUserPwd Admin@2024
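The databases created here are referenced later in values.custom.yaml as JDBC connection strings. A minimal sketch that derives those URLs from one host variable; the host/port are the example values used throughout this guide, and the database names assume the setup script's defaults:

```shell
#!/bin/sh
# Build the JDBC URLs that values.custom.yaml expects, one per database
# created by expcloud-setup-databases.sh (host and port are example values).
DB_HOST=dam-a.ecm.my
DB_PORT=5432
for db in expcloud otds expai rma; do
  printf 'jdbc:postgresql://%s:%s/%s\n' "$DB_HOST" "$DB_PORT" "$db"
done
```

Each printed line can be pasted into the matching `*DBUrl` setting in the values file.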
Database | User
-------- | --------
otds     | otdsuser
otmm_db  | otmmuser
rma      | rmauser
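To confirm the databases exist after running the setup script, a catalog query against `pg_database` can be used. This sketch only prints the query; uncomment the `psql` line to run it against the server (host is the example value from this guide):

```shell
#!/bin/sh
# Verification sketch: list the expected databases from the server catalog.
DB_HOST=dam-a.ecm.my
CHECK="SELECT datname FROM pg_database WHERE datname IN ('expcloud','expai','otds','rma');"
# Uncomment to run against the server:
# psql -h "$DB_HOST" -U postgres -c "$CHECK"
printf '%s\n' "$CHECK"
```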
Step 7. Downloading and extracting the Helm chart
helm repo add opentext https://registry.opentext.com/helm --username kwpark@penta.co.kr \
--password SystemAdmin@2024
helm repo update
helm search repo opentext
helm pull opentext/opentext-experience-cloud --version=24.4.0 --untar
cd opentext-experience-cloud
Step 8. Downloading the OpenText Docker images
# Log in with your OpenText support account
docker login registry.opentext.com
docker pull registry.opentext.com/experience-ai-api:24.4.0
docker pull registry.opentext.com/experience-ai-config:24.4.0
docker pull registry.opentext.com/experience-cloud-config:24.4.0
docker pull registry.opentext.com/experience-cloud-rabbitmq:3.13.6
docker pull registry.opentext.com/otds-server:24.4.0
docker pull registry.opentext.com/rma_base:24.4.0
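The six pulls above can be scripted in one loop. This sketch only prints each command (`DRY_RUN=echo`); set `DRY_RUN=` (empty) to actually pull:

```shell
#!/bin/sh
# Pull the images listed above in one loop.
# DRY_RUN=echo prints the commands; set DRY_RUN= (empty) to execute them.
REGISTRY=registry.opentext.com
IMAGES="experience-ai-api:24.4.0 experience-ai-config:24.4.0 \
experience-cloud-config:24.4.0 experience-cloud-rabbitmq:3.13.6 \
otds-server:24.4.0 rma_base:24.4.0"
DRY_RUN=echo
for img in $IMAGES; do
  $DRY_RUN docker pull "${REGISTRY}/${img}"
done
```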
Installing the nfs-csi-driver
mkdir -p ~/nfs-csi-driver
cd ~/nfs-csi-driver
curl -LO https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v4.6.0/deploy/install-driver.sh
chmod +x install-driver.sh
./install-driver.sh
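Once the driver is installed, an NFS-backed StorageClass can be defined and referenced from values.custom.yaml (see the commented `otmm-nfs` entries there). A sketch, assuming an NFS export at `nfs-server.example.com:/exports/expcloud` (both placeholders for your environment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: otmm-nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder: your NFS server
  share: /exports/expcloud         # placeholder: your exported path
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
```

Apply it with `kubectl apply -f <file>.yaml`, then set `rwoStorageClass`/`rwxStorageClass` in values.custom.yaml to `otmm-nfs`.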
Step 9. Creating the Kubernetes cluster
Example values.custom.yaml (for initial setup)
# Default values for Experience Cloud helm chart.
# This is a YAML-formatted file.
# ===================================================
# BEGIN: ANCHOR tags section
# ===================================================
# ------------------------------------
# Section to enable services on Experience Cloud
# ------------------------------------
# This determines whether OTDS is deployed or not. If left blank, default value will be true
otdsEnabled: &otdsEnabled true
# This determines whether RabbitMQ is deployed or not. If left blank, default value will be true
rabbitmqEnabled: &rabbitmqEnabled true
# This determines whether Assisted Authoring is deployed or not. If left blank, default value will be true
assistedAuthoringEnabled: &assistedAuthoringEnabled false
# This determines whether RMA is deployed or not. If left blank, default value will be true
rmaEnabled: &rmaEnabled false
# This determines whether the Experience Cloud Persistence Service and Admin UI is deployed or not. If left blank, default value will be true
expcloudServiceEnabled: true
# ------------------------------------
# Experience Cloud Options
# ------------------------------------
# The name of the tenant to be created in OTDS
tenantName: &tenantName otxf
# The username for OTDS account when creating the tenant in OTDS.
tenantAdminUsername: &tenantAdminUsername admin
# The password to give the "admin" account when creating the tenant in OTDS. This password must adhere to the
# default OTDS password policy, which is:
# Minimum eight characters
# At least one lower case character
# At least one upper case character
# At least one number
# At least one symbol
tenantAdminPassword: &tenantAdminPassword Admin@2024
# The password for otexpcadmin
# When left empty, the otexpcadmin user will use the password from the tenantAdminPassword setting.
# When upgrading, please ensure that if this password has been changed that it is updated here as this is required for the upgrade.
expCloudPassword: &expCloudPassword
# Accessing deployed Services
# ------------------------------------------
# The hostname that will be used to configure ingress access to deployed services. For example: otds.company.com
# - When deploying to minikube, set this value to the empty string
publicHostname: &publicHostname dam-a.ecm.my
# The URL scheme that will be used to configure secure access(https) to the deployed services.
# Set to true if using HTTPS, false otherwise
# For example: https://otds.company.com, here "useHttps" is set to true
useHttps: &useHttps true
# The configuration hostname used to configure services using a browser.
# The default value is the publicHostname -
# - when deploying to minikube or instance with reverse proxy, set this value to the reverse proxy hostname
configHostname: *publicHostname
# The Ingress Controller class value. For example, "nginx".
ingressClass: &ingressClass nginx
# The name of the secret that contains a TLS certificate for the Ingress Controller.
# For more information about creating a secret with a TLS certificate, see the Kubernetes documentation.
# If no certificate is provided, the Ingress Controller will use the default certificate. This setting
# is Optional.
ingressSecret: &ingressSecret tls-secret
# Enable ingress.
ingressEnabled: &ingressEnabled true
# The Read Write Once Storage Class to be used by any Experience Cloud service that requires it
rwoStorageClass: &rwoStorageClass standard
#rwoStorageClass: &rwoStorageClass otmm-nfs
#rwxStorageClass: &rwxStorageClass otmm-nfs
# Default image pull policy for Experience Cloud services
imagePullPolicy: &imagePullPolicy Always
# The JDBC connection string to connect to the database. The database name must match the database
# created before installing the Helm chart. The format is:
# Postgres: jdbc:postgresql://<Database server URL>:<Database port>/<Database name>
# MSSQL: jdbc:sqlserver://<Database server URL>:<Database port>;databaseName=<Database name>;
# Oracle (SID): jdbc:oracle:thin:@<Database server URL>:<Database port>:<SID>
# Oracle (Service name): jdbc:oracle:thin:@<Database server URL>:<Database port>/<Service name>
# Below are a few examples showing Postgres, MSSQL and Oracle connection strings:
# jdbc:postgresql://postgres.domain.local:5432/expcloud
# jdbc:sqlserver://mssql.domain.local:1433;databaseName=expcloud;
# jdbc:sqlserver://mssql.domain.local:1433;databaseName=expcloud;encrypt=false
# jdbc:oracle:thin:@oracle.domain.local:1521/orcl
expcloudDBUrl: "jdbc:postgresql://dam-a.ecm.my:5432/expcloud"
# The database user that was set up when creating the Experience Cloud database.
expcloudDBUser: expclouduser
# The database user password that was set up when creating the Experience Cloud database.
expcloudDBUserPassword: demo.9700
# The Allowed Origins setting used by the Experience Cloud Persistence Service
expcloudAllowedOrigins: "*"
# If you are using the OpenText Container Registry, or a private registry that requires credentials,
# provide the name of the Kubernetes secret that contains the credentials. For information about
# creating the secret, see the Kubernetes documentation. This setting is Optional.
# This is image pull secret as string
imagePullSecret: &imagePullSecret opentext-docker-secret
# If you are using the OpenText Container Registry, or a private registry that requires credentials,
# provide the name of the Kubernetes secret that contains the credentials. For information about
# creating the secret, see the Kubernetes documentation. This setting is Optional. Here is an
# example of the format:
# imagePullSecrets: &imagePullSecrets
# - name: image-pull-secret
imagePullSecrets: &imagePullSecrets
- name: opentext-docker-secret
# The timeout for Experience AI, RabbitMQ and OTDS configuration by the ECF Config pod in minutes.
# Default value is 5 minutes.
timeout: 5
# The Service Account to create and use for all Experience Cloud Foundation services.
serviceAccount: &serviceAccount "experience-cloud-sa"
# Set to false if the Service Account already exists before Experience Cloud Foundation is installed
# to prevent Helm from attempting to manage it.
createServiceAccount: true
# ------------------------------------
# Vault Options
# ------------------------------------
# These Vault options are meant to be used with OpenText's internal Vault system. For installations
# outside of OpenText's internal managed services, these options should be ignored.
# The address to access the Vault instance.
vaultUrl: &vaultUrl
# The namespace where vault is present. private-cloud is default namespace
vaultNamespace: &vaultNamespace private-cloud
# vault authentication type
vaultAuthenticationType: &vaultAuthenticationType jwt
# Secret engine path
vaultSecretEnginePath: &vaultSecretEnginePath
# Authentication role
vaultAuthenticationRole: &vaultAuthenticationRole
# Authentication Path
vaultAuthenticationPath: &vaultAuthenticationPath
# OTDS Specific vault properties
# The flag to decide whether vault should be enabled for OTDS or not. Default value is false
vaultEnabled: &vaultEnabled false
# Secret engine path for OTDS
# Value should be vaultSecretEnginePath suffixed with /otds
vaultOtdsSecretEnginePath: &vaultOtdsSecretEnginePath
# ------------------------------------
# Section to configure registry for services on Experience Cloud
# ------------------------------------
# The registry to pull the Experience Cloud containers from
experienceCloudRegistry: &experienceCloudRegistry registry.opentext.com
# The container registry to pull the OTDS images from. For example: docker-registry.company.com.
otdsRegistry: &otdsRegistry registry.opentext.com
# The registry for the Assisted Authoring container images
assistedAuthoringRegistry: &assistedAuthoringRegistry
# The registry for RMA service container image
rmaRegistry: &rmaRegistry
# ------------------------------------
# OTDS Options
# ------------------------------------
# The JDBC connection string to connect to the database. The database name must match the database
# that was created for OTDS. The format is:
# Postgres: jdbc:postgresql://<Database server URL>:<Database port>/<Database name>
# MSSQL: jdbc:sqlserver://<Database server URL>:<Database port>;databaseName=<Database name>;
# Oracle (SID): jdbc:oracle:thin:@<Database server URL>:<Database port>:<SID>
# Oracle (Service name): jdbc:oracle:thin:@<Database server URL>:<Database port>/<Service name>
# Below are a few examples showing Postgres, MSSQL and Oracle connection strings:
# jdbc:postgresql://postgres.domain.local:5432/otds
# jdbc:sqlserver://mssql.domain.local:1433;databaseName=otds;
# jdbc:sqlserver://mssql.domain.local:1433;databaseName=otds;encrypt=false
# jdbc:oracle:thin:@oracle.domain.local:1521/orcl
otdsDBUrl: &otdsDBUrl "jdbc:postgresql://dam-a.ecm.my:5432/otds"
# The database user that was set up when creating the OTDS database. If using Oracle it is recommended to set
# otds.otdsws.otdsdb.useDefaultSchema to "true" or else OTDS will try to create another user called "OTDS".
otdsDBUser: &otdsDBUser otdsuser
# The database user password that was set up when creating the OTDS database.
otdsDBUserPassword: &otdsDBUserPassword demo.9700
# cryptKey is used for secure synchronized access to backend DB from frontend instances
# This value is a 16 character ASCII string that has been base64 encoded
# For example, "MTIzNDU2Nzg5YWNiZGVmZw==" is the base64 encoded value of "123456789acbdefg"
otdsCryptKey: &otdsCryptKey MTIzNDU2Nzg5YWNiZGVmZw==
# The user otadmin@otds.admin will be created in the OTDS System tenant. This parameter will set this
# user's password.
otadminPassword: &otadminPassword demo.9700
# ------------------------------------
# Rabbitmq Options
# ------------------------------------
# User to create in RabbitMQ
rabbitmqUsername: &rabbitmqAppUser expcloud
# Password for user to create in RabbitMQ
rabbitmqPassword: &rabbitmqAppPassword demo.9700
#RabbitMQ Admin username
rabbitmqAdminUsername: &rabbitmqAdminUsername admin
#RabbitMQ Admin password
rabbitmqAdminPassword: &rabbitmqAdminPassword demo.9700
# ------------------------------------
# Assisted Authoring Options
# ------------------------------------
# The JDBC connection string to connect to the database. The database name must match the database
# created before installing the Helm chart. The format is:
# Postgres: jdbc:postgresql://<Database server URL>:<Database port>/<Database name>
# MSSQL: jdbc:sqlserver://<Database server URL>:<Database port>;databaseName=<Database name>;
# Oracle (SID): jdbc:oracle:thin:@<Database server URL>:<Database port>:<SID>
# Oracle (Service name): jdbc:oracle:thin:@<Database server URL>:<Database port>/<Service name>
# Below are a few examples showing Postgres, MSSQL and Oracle connection strings:
# jdbc:postgresql://postgres.domain.local:5432/expai
# jdbc:sqlserver://mssql.domain.local:1433;databaseName=expai;
# jdbc:sqlserver://mssql.domain.local:1433;databaseName=expai;encrypt=false
# jdbc:oracle:thin:@oracle.domain.local:1521/orcl
expaiDBUrl: &expaiDBUrl "jdbc:postgresql://dam-a.ecm.my:5432/expai"
# The database user that was set up when creating the Experience AI database.
expaiDBUser: &expaiDBUser "expaiuser"
# The database user password that was set up when creating the Experience AI database.
expaiDBUserPassword: &expaiDBUserPassword "demo.9700"
# If Magellan Text Mining Service is installed, provide the URL and details here and a provider will be created
# with some default settings automatically. Leave this blank and all of the other Magellan settings below are ignored.
# The settings this provider is created with can be edited using the Config UI. The URL for the Magellan provider.
# The format for this field is: http://<service name>.<namespace>.svc.cluster.local:<port>/rs/v2/. This provider will
# only be configured during an install and will not be configured during an upgrade.
providerMagellanUrl: &providerMagellanUrl
# The name of the Magellan Provider to be created
providerMagellanName: &providerMagellanName magellan
# The Magellan Text Mining Service user to use. This should match the field "credentials.engine.user" from the MTM Helm chart.
providerMagellanUser: &providerMagellanUser admin
# The Magellan Text Mining Service user password in base 64. This should match the field "credentials.engine.password" from the
# MTM Helm chart.
providerMagellanPassword: &providerMagellanPassword YWRtaW4K
# If LanguageTool is installed, provide the URL and details here and a provider will be created
# with some default settings automatically. Leave this blank and all of the other LanguageTool settings are ignored.
# The settings this provider is created with can be edited using the Config UI. The URL for the LanguageTool provider.
# The format for this field is: http://<service name>.<namespace>.svc.cluster.local:<port>/v2/. This provider will
# only be configured during an install and will not be configured during an upgrade.
providerLanguageToolUrl: &providerLanguageToolUrl
# The name of the Language Tool Provider to be created
providerLanguageToolName: &providerLanguageToolName languagetool
# ------------------------------------
# RMA Options
# ------------------------------------
# RMA Database Settings
databaseHostname: &dbHost dam-a.ecm.my
databasePort: &dbPort 5432
# Database type can be "POSTGRESQL","SQLSERVER" or "ORACLE"
databaseType: &dbType "POSTGRESQL"
rmaDBName: &rmaDBName
rmaDBUser: &rmaDBUser rmauser
rmaDBPassword: &rmaDBPassword demo.9700
# ------------------------------------
# Section to define proxy for Experience Cloud Services
# ------------------------------------
#Define if proxy is enabled or not
proxyEnabled: &proxyEnabled false
#Proxy host
proxyHost: &proxyHost
#Http Proxy port
httpProxyPort: &httpProxyPort 3128
# No proxy defines a list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying.
# If using a proxy, be sure to include "*.svc.cluster.local" in the noProxy list to avoid having internal traffic proxied.
noProxy: &noProxy "*.svc.cluster.local"
# ------------------------------------
# NewRelic Options
# ------------------------------------
# Enable or disable NewRelic monitoring.
newRelicEnabled: &newRelicEnabled false
# The license key provided by NewRelic.
newRelicLicenseKey: &newRelicLicenseKey ""
# The proxy host to use for NewRelic.
newRelicProxyHost: &newRelicProxyHost ""
# The port number to use with the proxy host for NewRelic.
newRelicProxyPort: &newRelicProxyPort
# The scheme to use with the proxy host and port. Values can be "http" or "https".
newRelicProxyScheme: &newRelicProxyScheme ""
# The application name to use for OTDS in NewRelic
newRelicOTDSAppName: &newRelicOTDSAppName ""
# Suffix to add to the end of the application name for Experience AI's API and Config pods.
newRelicAssistedAuthoringSuffix: &newRelicAssistedAuthoringSuffix ""
# The application name to use for Experience Cloud persistence service in NewRelic
newRelicExpCloudAppName: expcloud
# ===================================================
# END: ANCHOR tags section
# ===================================================
# ===================================================
# BEGIN: Detailed product settings. Any setting
# below here is either optional or can be left
# at the default setting.
# ===================================================
# ------------------------------------
# Global Options
# ------------------------------------
# Add extra pod match labels for Experience AI and RabbitMQ deployments, statefulsets, pods and services globally
# Other services (Admin UI, OTDS, RMA) do not yet support this.
# Example:
# extraPodMatchLabels: &extraPodMatchLabels
# matchkey: matchvalue
extraPodMatchLabels: &extraPodMatchLabels {}
# ------------------------------------
# Experience Cloud Detailed Options
# ------------------------------------
tenant:
# The name of the tenant to create in OTDS
name: *tenantName
# The username for admin account when creating the tenant
adminUsername: *tenantAdminUsername
# The password to give the "admin" account when creating the tenant
adminPassword: *tenantAdminPassword
# The name of the OTDS oAuth client to be created.
oAuthClientName: experience-cloud
# The name of the public (not confidential) OTDS oAuth client to be created.
publicOAuthClientName: experience-cloud-public
# The name of the OTDS partition to be created.
partition: experience-cloud
# The password for the otexpcadmin user
expCloudPassword: *tenantAdminPassword
# Provide the name and password for a user and the user will be created with admin permissions to Experience AI,
# Experience Cloud (read only) and Rich Media Analysis. Any services not enabled will be skipped. The password must adhere to
# the password policy. See comments on "tenantAdminPassword" for details on password policy.
monitoringUser:
monitoringUserPassword:
# Provide the name and password for a user and the user will be created with admin permissions to Experience AI,
# Experience Cloud and Rich Media Analysis. Any services not enabled will be skipped. The password must adhere to
# the password policy. See comments on "tenantAdminPassword" for details on password policy.
businessAdminUser:
businessAdminUserPassword:
# The repository to pull the Experience Cloud containers from
registry: *experienceCloudRegistry
config:
# The config pod image
image: experience-cloud-config
# The tag of the config pod
tag: 24.4.0
# The service account the config pod should run as.
serviceAccount: *serviceAccount
# Security context for config job container
containerSecurityContext:
allowPrivilegeEscalation: false
# Security context for config job pod
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
# Annotations to include on config pod. Example:
# podAnnotations:
# annotation-key: 'annotation-value'
podAnnotations: {}
# Labels to include on config pod. Example:
# podLabels:
# label-key: 'label-value'
podLabels: {}
# The number of seconds curl is allowed to wait when communicating with services
curlTimeout: 10
service:
# Number of replicas for the Experience Cloud Persistence Service/Admin UI deployment.
replicas: 1
# The database schema to use for the admin service
databaseSchema: expcloud
# The service pod image
image: experience-cloud-service
# The tag of the service pod
tag: 24.4.0
# The deployment upgrade strategy. Valid values are "Recreate" or "RollingUpdate". Default is "RollingUpdate".
upgradeStrategy: RollingUpdate
# The service account for the service pod to use.
serviceAccount: *serviceAccount
annotations: {}
ingress:
# The name of the secret that contains a TLS certificate for the Ingress Controller used with Experience Cloud Persistence Service/Admin UI deployment.
# For more information about creating a secret with a TLS certificate, see the Kubernetes documentation.
# If no certificate is provided, the Ingress Controller will use the default certificate.
# This setting is Optional.
secret: *ingressSecret
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
resources:
requests:
# The CPU request for the Experience Cloud Service container. The default value is 0.5. OpenText recommends 0.5 for
# a small installation and 1 for medium or large installations. This setting is Optional.
cpu: 0.5
# The memory request for the Experience Cloud Service container. The default value is 1.5Gi. OpenText recommends
# 1.5 for all installation sizes. This setting is Optional.
memory: 1.5Gi
jvmOptions: "-Xms512m -Xmx1024m"
# Security context for experience cloud service container
containerSecurityContext:
allowPrivilegeEscalation: false
# Security context for experience cloud service pod
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
# Annotations to include on Admin UI pods. Example:
# podAnnotations:
# annotation-key: 'annotation-value'
podAnnotations: {}
# Labels to include on Admin UI pods. Example:
# podLabels:
# label-key: 'label-value'
podLabels: {}
# Provide a custom truststore for the Admin UI so that it can trust certificates that are not trusted by default in Java
truststore:
# The name of the secret that contains the truststore file. Create with:
# kubectl create secret generic <name> -n <namespace> --from-file=<truststorePath>
# Ex. kubectl create secret generic custom-truststore -n expcloud --from-file=/tmp/cacerts
secretName:
# The password for the truststore file provided in the secret above. If left blank, the Admin UI will use the default truststore
password:
# The file name of the truststore that was created in the secret above
fileName: cacerts
# ------------------------------------
# OTDS Detailed Options
# ------------------------------------
# These are the settings to configure OTDS.
otds:
enabled: *otdsEnabled
global:
otdsUseReleaseName: true
namespace:
imageSource:
imageSourcePublic:
imagePullSecret:
imagePullPolicy:
serviceAccountName:
serviceType:
otdsServiceName: otdsws
resourceRequirements: true
existingSecret:
timeZone: Etc/UTC
database:
adminDatabase:
adminUsername:
extraPodMatchLabels: *extraPodMatchLabels
ingress:
enabled: *ingressEnabled
secret: *ingressSecret
class: *ingressClass
annotations:
# nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "true"
exposeIndividualEndpoints: false
paths:
- "/otds-admin/"
- "/otdstenant/"
- "/otdsws/"
- "/ot-authws/"
- "/otds-v2/"
otdsws:
securityContext:
fsGroup: 1000
runAsGroup: 1000
runAsUser: 1000
readOnlyRootFilesystem: false
podLabels: {}
podAnnotations: {}
enabled: true
serviceAccountName: *serviceAccount
statefulSet: false
affinity:
tolerations:
topologySpreadConstraints:
ingress:
enabled: *ingressEnabled
secret:
# If changing the prependPath from the default, please be sure to set expai.global.env.OTDS_INTERNAL_ADDRESS_OVERRIDE if using
# Experience AI and mediaanalysis.global.internal.otds.prependPath if using RMA.
prependPath: otds
serviceName: otdsws
serviceType:
serviceAnnotations: {}
carrierGradeNAT: true
customSecretName:
replicas: 1
port: 80
publicHostname: *publicHostname
timeZone:
allowDuplicateUsers: true
allowNonIndexedSearch: false
systemGlobalOAuthClients: "ExstreamSystem"
cryptKey: *otdsCryptKey
additionalJavaOpts:
enableBootstrapConfig: false
existingBootstrapConfig: |
kerberos:
enabled: false
keytabFile: |
configFile: |
adminEmail:
adminPassword: *otadminPassword
isBizAdmin: false
enableCustomizedTruststore: false
singleCaCert: |
migration:
enabled: false
usingLegacyImage: false
legacyImagePVC:
serviceName: opendj
servicePort: 1389
deploymentName:
opendjUri:
password:
preUpgradeJob:
enabled: false
timeout: 100h
resources:
requests:
cpu: 0.5
memory: 3Gi
limits:
cpu: 2
memory: 3Gi
jvmMemory:
image:
source:
name: bitnami/kubectl
tag: latest
otdsdb:
url: *otdsDBUrl
username: *otdsDBUser
password: *otdsDBUserPassword
useDefaultSchema: false
automaticDatabaseCreation:
enabled: false
dbAdmin:
dbAdminPassword:
dbExtensions:
- pg_trgm
dbImage:
source:
name: bitnami/postgresql
tag: latest
pullPolicy:
image:
source: *otdsRegistry
name: otds-server
tag: 24.4.0
pullPolicy: *imagePullPolicy
pullSecret: *imagePullSecret
resources:
enabled: true
requests:
cpu: 0.5
memory: 1Gi
limits:
cpu: 2
memory: 1.5Gi
newrelic:
NEW_RELIC_LICENSE_KEY: *newRelicLicenseKey
NEW_RELIC_APP_NAME: *newRelicOTDSAppName
NEW_RELIC_LOG_FILE_NAME: STDOUT
NEW_RELIC_LOG_LEVEL: info
NEW_RELIC_BROWSER_MONITORING_AUTO_INSTRUMENT: "false"
pvc:
enabled: false
storage: 256Mi
storageClassName:
logging:
logToFiles: false
logToPVC: false
logRequests: true
vault:
enabled: *vaultEnabled
agentInjector: false
url: *vaultUrl
namespace: *vaultNamespace
authpath: *vaultAuthenticationPath
tokenAudience:
proxyAddress:
role: *vaultAuthenticationRole
secretsPath: *vaultOtdsSecretEnginePath
initContainers:
# ------------------------------------
# rabbitmq Options
# ------------------------------------
rabbitmq:
# This determines whether rabbitmq is deployed or not.
enabled: *rabbitmqEnabled
# Expose the rabbit console through ingress
global:
extraPodMatchLabels: *extraPodMatchLabels
extraConfiguration: |
management.path_prefix = /rabbitmq
consumer_timeout = 172800000
auth:
## @param auth.username RabbitMQ application username
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
username: *rabbitmqAdminUsername
## @param auth.password RabbitMQ application password
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
password: *rabbitmqAdminPassword
## @param auth.erlangCookie Erlang cookie to determine whether different nodes are allowed to communicate with each other
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
erlangCookie: erlangCookie
# The address to access the Vault instance.
vaultUrl: *vaultUrl
# The namespace where vault is present. private-cloud is default namespace
vaultNamespace: *vaultNamespace
# vault authentication type
vaultAuthenticationType: *vaultAuthenticationType
# Secret engine path
vaultSecretEnginePath: *vaultSecretEnginePath
# Authentication role
vaultAuthenticationRole: *vaultAuthenticationRole
# Authentication Path
vaultAuthenticationPath: *vaultAuthenticationPath
image:
# The container registry to pull the images from, as configured in Push the container image. For
# example: docker-registry.company.com.
registry: *experienceCloudRegistry
# Docker image pull policy: Always, Never, or IfNotPresent
# See: https://kubernetes.io/docs/concepts/configuration/overview/#container-images
pullPolicy: *imagePullPolicy
# The repository name where the image is located. For
# example: rabbitmq
repository: experience-cloud-rabbitmq
# The image tag to use.
tag: 3.13.6
# Image pull secret to use for registry authentication.
# See: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
pullSecrets: *imagePullSecrets
## Configure the ingress resource that allows you to access the
## RabbitMQ installation. Set up the URL
## ref: https://kubernetes.io/docs/user-guide/ingress/
##
ingress:
## @param ingress.enabled Enable ingress resource for Management console
##
enabled: *ingressEnabled
## @param ingress.hostname Default host for the ingress resource
##
hostname: *publicHostname
## @param ingress.path Path for the default host.
##
path: /rabbitmq
# The Ingress Controller class value. For example, "nginx". This setting is Optional.
ingressClassName: *ingressClass
## @param ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
## @param ingress.secrets Custom TLS certificates as secrets
## NOTE: 'key' and 'certificate' are expected in PEM format
## NOTE: 'name' should line up with a 'secretName' set further up
## If it is not set and you're using cert-manager, this is unneeded, as it will create a secret for you with valid certificates
## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created valid for 365 days
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
## e.g:
## secrets:
## - name: rabbitmq.local-tls
## key: |-
## -----BEGIN RSA PRIVATE KEY-----
## ...
## -----END RSA PRIVATE KEY-----
## certificate: |-
## -----BEGIN CERTIFICATE-----
## ...
## -----END CERTIFICATE-----
##
secrets: []
secretName: *ingressSecret
# Was rabbitmqMemoryHighWatermark
memoryHighWatermark:
enabled: true
podManagementPolicy: Parallel
# Was persistentVolume
persistence:
storageClass: *rwoStorageClass
# Have to include this, even though we set memoryHighWatermark to absolute, or rabbitmq's helm chart will puke on install
resources:
requests:
cpu: 0.5
memory: 1Gi
limits:
cpu: 2
memory: &rabbitmqMemory "2Gi"
service:
type: ClusterIP
## @param containerSecurityContext RabbitMQ containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## Example:
## containerSecurityContext:
## capabilities:
## drop: ["NET_RAW"]
## readOnlyRootFilesystem: true
##
containerSecurityContext:
allowPrivilegeEscalation: false
## RabbitMQ pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable RabbitMQ pods' Security Context
## @param podSecurityContext.fsGroup Group ID for the filesystem used by the containers
## @param podSecurityContext.runAsUser User ID for the service user running the pod
podSecurityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
## RabbitMQ pods ServiceAccount
## @param serviceAccount.name Name of the created serviceAccount
## If not set and create is true, a name is generated using the rabbitmq.fullname template
## If not set and create is not set, "default" service account would be used. This is the default value
serviceAccount:
## @param serviceAccount.create Enable creation of ServiceAccount for RabbitMQ pods
##
create: ""
## @param serviceAccount.name Name of the created serviceAccount
## If not set and create is true, a name is generated using the rabbitmq.fullname template
##
name: *serviceAccount
# ---- metadata annotations ----
## @param podAnnotations RabbitMQ Pod annotations. Evaluated as a template
podAnnotations: {}
## @param podLabels RabbitMQ Pod labels. Evaluated as a template
podLabels: {}
## @param terminationGracePeriodSeconds Default duration in seconds k8s waits for container to exit before sending kill signal.
## Any time in excess of 10 seconds will be spent waiting for any synchronization necessary for cluster not to lose data.
##
terminationGracePeriodSeconds: 60
# ------------------------------------
# Assisted Authoring Detailed Options
# ------------------------------------
expai:
# Any users added to this group will be administrators of Assisted Authoring. This group will be created by Experience
# Cloud
adminGroup: &adminGroup expai-admins
# This is an optional section to have Experience Cloud set up some default providers for Assisted Authoring. Leave
# both provider URLs blank to have Experience Cloud not create any providers automatically. These can be created
# using the Assisted Authoring Config UI as well.
providers:
magellan:
# If Magellan Text Mining Service is installed, provide the URL and details here and a provider will be created
# with some default settings automatically. Leave this blank and all of the other Magellan settings are ignored.
# The settings this provider is created with can be edited using the Config UI. The format for this field is:
# http://<service name>.<namespace>.svc.cluster.local:<port>/rs/v2/
url: *providerMagellanUrl
# The name of the provider to be created
name: *providerMagellanName
# The Magellan Text Mining Service user to use
user: *providerMagellanUser
# The Magellan Text Mining Service user password in base 64
password: *providerMagellanPassword
languagetool:
# If LanguageTool is installed, provide the URL here and a provider will be created with some default settings
# automatically. Leave this blank and all of the other LanguageTool settings are ignored. The settings this
# provider is created with can be edited using the Config UI. The format for this field is:
# http://<service name>.<namespace>.svc.cluster.local:<port>/v2/
url: *providerLanguageToolUrl
# The name of the provider to be created
name: *providerLanguageToolName
enabled: *assistedAuthoringEnabled
# The default language code to use when one is not provided in the request
defaultLanguage: en-US
global:
# The address to access the Vault instance.
vaultUrl: *vaultUrl
# The namespace where vault is present. private-cloud is default namespace
vaultNamespace: *vaultNamespace
# vault authentication type
vaultAuthenticationType: *vaultAuthenticationType
# Secret engine path
vaultSecretEnginePath: *vaultSecretEnginePath
# Authentication role
vaultAuthenticationRole: *vaultAuthenticationRole
# Authentication Path
vaultAuthenticationPath: *vaultAuthenticationPath
# The repository for the Assisted Authoring container images
repository: *assistedAuthoringRegistry
imagePullSecrets: *imagePullSecrets
ingress:
host: *publicHostname
secret:
name: *ingressSecret
class: *ingressClass
# The service account name.
serviceAccount:
name: *serviceAccount
env:
HTTP_PROXY: *proxyHost
PROXY_PORT: *httpProxyPort
NO_PROXY: *noProxy
DATABASE_URL: *expaiDBUrl
DATABASE_USER: *expaiDBUser
DATABASE_PASSWORD: *expaiDBUserPassword
DATABASE_SCHEMA: "expai"
DATABASE_CONNECTION_MAX_LIFETIME: 600000
# Format is "http://<releaseName>-otdsws.<namespace>.svc.cluster.local:80/<OTDS prepend path>/otdsws". If OTDS is using a blank
# prepend path, remove the additional / to make it a valid URL.
OTDS_INTERNAL_ADDRESS_OVERRIDE: ""
enable_probes: true
newrelic:
agent_enabled: *newRelicEnabled
license_key: *newRelicLicenseKey
proxy_host: *newRelicProxyHost
proxy_port: *newRelicProxyPort
proxy_scheme: *newRelicProxyScheme
application_name_suffix: *newRelicAssistedAuthoringSuffix
extraPodMatchLabels: *extraPodMatchLabels
api:
enabled: true
replicaCount: 1
deployment:
imagePullPolicy: *imagePullPolicy
nodeSelector: {}
otds_authentication: false
readinessProbe:
enabled: true
livenessProbe:
enabled: true
sslSecret:
enabled: false
name: ""
cacerts: false
expaiP12: false
service:
annotations: {}
containerSecurityContext:
allowPrivilegeEscalation: false
ingress:
enabled: *ingressEnabled
annotations:
# Add any necessary ingress annotations. If using OpenShift, use annotation:
# route.openshift.io/termination: "edge"
env:
DEFAULT_TENANT: *tenantName
administrator_system_role: *adminGroup
EAIS_JAVAX_NET_SSL_TRUSTSTORE: ""
EAIS_JAVAX_NET_SSL_TRUSTSTORE_PASSWORD: ""
EAIS_HOST_WHITELIST_REGEX: ".*"
config:
enabled: true
replicaCount: 1
deployment:
imagePullPolicy: *imagePullPolicy
nodeSelector: {}
readinessProbe:
enabled: true
livenessProbe:
enabled: true
sslSecret:
enabled: false
name: ""
cacerts: false
expaiP12: false
service:
annotations: {}
containerSecurityContext:
allowPrivilegeEscalation: false
ingress:
enabled: *ingressEnabled
annotations:
# Add any necessary ingress annotations. If using OpenShift, use annotation:
# route.openshift.io/termination: "edge"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "expai-session-affinity"
nginx.ingress.kubernetes.io/affinity-mode: "persistent"
env:
administrator_url: ""
administrator_system_role: *adminGroup
EAIS_JAVAX_NET_SSL_TRUSTSTORE: ""
EAIS_JAVAX_NET_SSL_TRUSTSTORE_PASSWORD: ""
EAIS_HOST_WHITELIST_REGEX: ".*"
USE_HTTPS: *useHttps
PUBLIC_HOSTNAME: *publicHostname
# ------------------------------------
# RMA Detailed Options
# ------------------------------------
mediaanalysis:
enabled: *rmaEnabled
extensionImages: []
# - name:
# image: "<docker registry URL for extension image>"
# - name:
# image: "<docker registry URL for extension image>"
# ---- metadata annotations ----
# Example:
# podAnnotations:
# backup.velero.io/backup-volumes-excludes: 'rma-custom-volume'
# podLabels:
# app.kubernetes.io/component: 'mediaanalysis'
# containerSecurityContext:
# readOnlyRootFilesystem: true
podAnnotations: {}
podLabels: {}
containerSecurityContext: {}
global:
useReleaseName: true
product: "RMA"
namespace:
imagePullPolicy: *imagePullPolicy
imagePullSecret: *imagePullSecret
repository: *rmaRegistry
secureEndpoints: *useHttps
publicHostName: *publicHostname
newRelicLicenseKey: *newRelicLicenseKey
cfcrBaseUrl: &cfcr_base_url
# Internal Kubernetes hostnames for OTDS + RMA
internal:
otds:
# if the value is blank, it defaults to "<releasename>-otdsws" or "otdsws" depending on whether the "useReleaseName" property is set to true or false
host:
# The prepend path must be the OTDS prepend path, prefixed with a forward slash
prependPath: /otds
rabbitmq:
# if the value is blank, it defaults to "<releasename>-rabbitmq" or "rabbitmq" depending on whether the "useReleaseName" property is set to true or false
host:
port: 5672
# The prepend path must be the RabbitMQ prepend path, prefixed with a forward slash
prependPath: /rabbitmq
auth:
rabbitmqDefaultUser: *rabbitmqAppUser
rabbitmqPassword: *rabbitmqAppPassword
keystorePass:
# Configure the database backend for persisting RMA subscriptions
database:
type: *dbType
host: *dbHost
port: *dbPort
name: *rmaDBName
user: *rmaDBUser
password: *rmaDBPassword
service:
name: rma
type: ClusterIP
port: 8093
container:
name: otmm_rma
tag: 24.4.0
nodeSelector:
useNodeSelector: false
ingress:
enabled: *ingressEnabled
class: *ingressClass
secret: *ingressSecret
type: *ingressClass
# PROXY settings
javaOptions:
proxy: *proxyEnabled
proxyhost: *proxyHost
proxyport: *httpProxyPort
noproxy: *noProxy
# Service account name and whether RMA should create service account
serviceAccount:
name: *serviceAccount
create: false
# vault properties for RMA
vault:
server:
# address of the vault server expressed as a URL and port, e.g., http://1.2.3.4:8200
address: *vaultUrl
namespace: *vaultNamespace
# if a custom vault secrets engine root is used, specify it here or leave it blank.
secretEngineRoot: *vaultSecretEnginePath
agent:
auth:
# this is the Kubernetes authentication path name
type: *vaultAuthenticationType
path: *vaultAuthenticationPath
role: *vaultAuthenticationRole
configmaps:
#
# Media Analysis properties
# -------------------------------------------------
# Specify image analysis provider: ["Azure", "Google", "AWS"], default is "Azure"
MEDIAANALYSIS_IMAGE_PROVIDER: ''
# Specify video analysis provider: ["Azure", "Google"], default is "Azure"
MEDIAANALYSIS_VIDEO_PROVIDER: ''
RABBITMQ_SECRETS_KEY: rabbitmq
RABBITMQ_SECRETS_USERNAME_KEY: appUsername
RABBITMQ_SECRETS_PASSWORD_KEY: appPassword
# -------------------------------------------------------------
# Experience Cloud Persistence Service Logging Detailed Options
# -------------------------------------------------------------
# Experience Cloud Persistence Service Logging Level
expcloudServiceLoggingLevel: INFO
# Experience Cloud Persistence Service Hibernate SQL Logging Level
expcloudServiceHibernateLoggingLevel: ERROR
# Experience Cloud Persistence Service Spring framework Security Logging Level
expcloudServiceSpringframeworkSecurityLogging: ERROR
# Experience Cloud Persistence Service Spring framework Web Logging Level
expcloudServiceSpringframeworkWebLogging: ERROR
global:
# This is to decide whether to create config-maps in subcharts or not. No need to change this value.
product: "experience-cloud"
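Because values.custom.yaml leans heavily on YAML anchors (`&name`) and aliases (`*name`), a broken or missing anchor is the most common cause of a failed render. The following is a minimal sketch for sanity-checking the file before installing; `check_aliases` is a hypothetical helper (not part of the chart), it assumes bash for process substitution, and the `helm template` dry-render assumes you are in the unpacked chart directory:

```shell
# List any alias (*name) in the file that has no matching anchor (&name) definition
check_aliases() {
  comm -13 \
    <(grep -o '&[A-Za-z_][A-Za-z0-9_]*' "$1" | tr -d '&' | sort -u) \
    <(grep -o '\*[A-Za-z_][A-Za-z0-9_]*' "$1" | tr -d '*' | sort -u)
}
check_aliases values.custom.yaml   # no output means every alias resolves

# Dry-render the chart with the custom values to catch template errors early
command -v helm >/dev/null && \
  helm template expcloud . -f values.custom.yaml >/dev/null && echo "templates render OK"
```

The anchor check is purely textual (it does not parse YAML), so it is a quick first filter; `helm template` is the authoritative validation.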
# The opentext-docker-secret referenced by imagePullSecrets in values.custom.yaml is created with the following command:
kubectl create secret docker-registry opentext-docker-secret \
--docker-server=registry.opentext.com \
--docker-username=kwpark@penta.co.kr \
--docker-password=SystemAdmin@2024 \
--namespace otxf
# To remove the secret before recreating it:
kubectl delete secret opentext-docker-secret -n otxf
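After creating the secret, it can be sanity-checked before running the install. A minimal sketch (secret and namespace names follow the command above; the base64 round-trip at the end only illustrates how the `auth` field inside `.dockerconfigjson` is encoded, it is not a required step):

```shell
# Confirm the secret exists and is of the expected type (kubernetes.io/dockerconfigjson)
command -v kubectl >/dev/null && \
  kubectl get secret opentext-docker-secret -n otxf -o jsonpath='{.type}'

# Inspect the registry credentials stored in the secret
command -v kubectl >/dev/null && \
  kubectl get secret opentext-docker-secret -n otxf \
    -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d

# The "auth" field inside .dockerconfigjson is simply base64("<username>:<password>"):
auth=$(printf '%s' 'kwpark@penta.co.kr:SystemAdmin@2024' | base64)
printf '%s' "$auth" | base64 -d   # prints the original user:password pair
```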
Step 10. Installation
cd opentext-experience-cloud
kubectl create namespace otxf
# If reinstalling, remove the previous release and its volumes first:
helm uninstall expcloud -n otxf
kubectl delete pvc --all -n otxf
helm install expcloud -n otxf -f values_expcloud.yaml \
-f resource-df.yaml \
.
helm upgrade expcloud -i -n otxf -f values_expcloud.yaml \
-f resource-df.yaml \
.
kubectl rollout restart -n otxf deployment/expcloud-rma
kubectl rollout restart -n otxf statefulset/expcloud-rabbitmq
helm install expcloud ./ -n expcloud -f resources/resource-test.yaml -f values.expcloud.yaml --timeout 10m0s --wait
helm install expcloud ./ -n otxf \
-f resources/resource-test.yaml \
-f ecf-config.yaml \
--timeout 10m0s --wait
helm upgrade expcloud ./ -n otxf \
-f resources/resource-test.yaml \
-f ecf-config.yaml \
--timeout 10m0s --wait
helm install expcloud ./ -n otxf \
-f resources/resource.yaml \
-f nginx-annotations.yaml \
-f values.custom.yaml \
--timeout 10m0s --wait
helm upgrade otxf ./ -n otxf \
-f resources/resource.yaml \
-f nginx-annotations.yaml \
-f values.custom.yaml \
--timeout 10m0s --wait
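Whichever install or upgrade variant is used, the rollout can be verified afterwards. A minimal sketch (release and namespace names follow the commands above; `not_ready` is a hypothetical helper, not part of the chart, that filters `kubectl get pods` output):

```shell
# Print pod names whose STATUS column is neither Running nor Completed (header skipped)
not_ready() { awk 'NR>1 && $3!="Running" && $3!="Completed" {print $1}'; }

command -v helm    >/dev/null && helm status expcloud -n otxf
command -v kubectl >/dev/null && kubectl get pods -n otxf | not_ready

# The filter itself can be exercised on captured output:
printf 'NAME READY STATUS RESTARTS AGE\npod-a 1/1 Running 0 5m\npod-b 0/1 CrashLoopBackOff 3 5m\n' \
  | not_ready   # prints: pod-b
```

An empty result from the `kubectl get pods` line means every pod in the namespace has settled.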
Create the RMA user (if not done already). Connect to the RabbitMQ pod and create the user:
kubectl exec -it -n otxf expcloud-rabbitmq-0 -- bash
rabbitmqctl add_user expcloud Admin@2024
rabbitmqctl set_user_tags expcloud none
rabbitmqctl set_permissions -p / expcloud ".*" ".*" ".*"
rabbitmqctl list_users
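The new account can also be verified from outside the pod. A minimal sketch (pod, user, and vhost names follow the commands above; the management-API URL assumes the `/rabbitmq` ingress path shown in the install notes below):

```shell
# Check that the user exists and list permissions on the default vhost
command -v kubectl >/dev/null && {
  kubectl exec -n otxf expcloud-rabbitmq-0 -- rabbitmqctl list_users | grep '^expcloud'
  kubectl exec -n otxf expcloud-rabbitmq-0 -- rabbitmqctl list_permissions -p /
}

# Or authenticate through the management API exposed via the ingress
command -v curl >/dev/null && \
  curl -fsS -u expcloud:Admin@2024 https://dam-a.penta.co.kr/rabbitmq/api/whoami
```

A successful `whoami` response confirms both the credentials and the ingress path in one step.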
Success message
damadmin@dam-a:~/opentext-experience-cloud$ helm upgrade expcloud -i -n otxf -f values_expcloud.yaml \
> -f resource-df.yaml \
> .
Release "expcloud" has been upgraded. Happy Helming!
NAME: expcloud
LAST DEPLOYED: Tue Jun 17 14:12:04 2025
NAMESPACE: otxf
STATUS: deployed
REVISION: 2
NOTES:
Experience Cloud has been installed. Below you will find the details of the configuration:
OTDS Tenant: otxf
OTDS Tenant Initial Credentials: otadmin@otds.admin/Admin@2024
OTDS Host: https://dam-a.penta.co.kr
OTDS System Tenant Admin URL: https://dam-a.penta.co.kr/otds-admin
OTDS Tenant Admin URL: https://dam-a.penta.co.kr/otdstenant/otxf/otds-admin
OTDS Tenant Partition Created: experience-cloud
OTDS Tenant oAuth Client Created: experience-cloud
RabbitMQ URL: https://dam-a.penta.co.kr/rabbitmq/
RMA Installed: true
Below is a YAML formatted Experience Cloud configuration:
otds:
tenant: "otxf"
tenantAdmin: "otadmin@otds.admin"
tenantAdminPassword: "Admin@2024"
externalUrl: "https://dam-a.penta.co.kr/otdsws/otdstenant/otxf"
externalHost: "dam-a.penta.co.kr"
internalUrl: "http://expcloud-otdsws.otxf.svc.cluster.local:80"
internalHost: "expcloud-otdsws.otxf.svc.cluster.local"
internalPort: 80
prependPath: ""
rabbitmq:
externalUrl: "https://dam-a.penta.co.kr/rabbitmq/"
internalUrl: "http://expcloud-rabbitmq.otxf.svc.cluster.local:5672"
internalHost: "expcloud-rabbitmq.otxf.svc.cluster.local"
internalPort: 5672
user: "expcloud"
userPassword: "Admin@2024"
adminUser: "admin"
adminUserPassword: "Admin@2024"
prependPath: "/rabbitmq"
rma:
internalUrl: "http://expcloud-rma.otxf.svc.cluster.local:8093"
admin: "otexpcadmin"
adminPassword: "Admin@2024"
To save the YAML configuration to a file, run the following script:
opentext-experience-cloud/scripts/manual-scripts/saveYamlToFile.sh otxf expcloud-values.yaml
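If the helper script is unavailable, the same YAML can be recovered later from the stored release notes; a sketch under the assumption that the configuration block always starts at the `otds:` line, as in the output above:

```shell
# Re-print the release notes and keep everything from "otds:" onward
command -v helm >/dev/null && \
  helm get notes expcloud -n otxf | sed -n '/^otds:/,$p' > expcloud-values.yaml
```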
reset-otxf-install.sh
#!/bin/bash
set -e
NAMESPACE=otxf
RELEASE_NAME=expcloud
echo "🔁 1. Removing Helm release..."
helm uninstall "$RELEASE_NAME" -n "$NAMESPACE" || echo "⚠️ Helm release does not exist"
echo "🧹 2. Deleting PVCs..."
kubectl delete pvc --all -n "$NAMESPACE" || true
echo "🧹 3. Deleting Secrets..."
kubectl delete secret rabbitmq -n "$NAMESPACE" --ignore-not-found=true
kubectl delete secret --all -n "$NAMESPACE" || true
echo "🧹 4. Deleting ConfigMaps..."
kubectl delete configmap --all -n "$NAMESPACE" || true
echo "🧨 5. Deleting namespace (this may take a moment)..."
kubectl delete ns "$NAMESPACE" || true
sleep 5
echo "⏳ Waiting for the namespace deletion to complete..."
while kubectl get ns "$NAMESPACE" >/dev/null 2>&1; do
  echo "⏳ Still deleting... waiting..."
  sleep 3
done
echo "✅ Namespace fully deleted"
echo "🚀 6. Recreating namespace..."
kubectl create ns "$NAMESPACE"
echo "🏁 Reset complete. Proceed with the installation using:"
echo ""
echo "helm install $RELEASE_NAME ./ -n $NAMESPACE \\"
echo " -f ecf-config.yaml \\"
echo " -f resources/resource-test.yaml \\"
echo " --timeout 10m0s --wait"
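Assuming the script is saved as reset-otxf-install.sh (the filename in the heading above), it can be run as follows. Note that because the script deletes the entire namespace, every Secret in it is also wiped, so the image pull secret from the earlier step has to be recreated before reinstalling:

```shell
chmod +x reset-otxf-install.sh
./reset-otxf-install.sh

# Recreate the image pull secret (names and values as used earlier in this guide)
# before running helm install again:
command -v kubectl >/dev/null && \
  kubectl create secret docker-registry opentext-docker-secret \
    --docker-server=registry.opentext.com \
    --docker-username=kwpark@penta.co.kr \
    --docker-password=SystemAdmin@2024 \
    --namespace otxf
```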