Server

The Sophora server manages the entire content repository. There are three different roles a Sophora server can have: Master, Staging and Fall-back.


Throughout this page the string "cms-directory" refers to the installation directory of the entire Sophora application.

Directory Structure

The directory cms-directory always contains the following folders:

  • apps – The directory apps contains the software components used in your Sophora environment. Amongst others, these are the Apache, Tomcat and the Sophora server libraries. For each additional Sophora component, like the Sophora Importer, the Sophora Indexer or the Sophora Linkchecker, a separate directory is created on the same level as the apps directory. These folders contain the components' configuration files.
  • The apps directory itself also contains symbolic links to the actual (release dependent) files or directories. Thus, when updating the server's or a component's software, you only have to change these links. For example, the link cms-directory/apps/sophora refers to the directory cms-directory/apps/com.subshell.sophora.server-1.30.0. Such links exist for every installed Sophora component.
  • sophora – This folder contains the actual instance of the particular Sophora server. It is basically the workspace of the Sophora server installed in the apps directory. In addition, there is a link sophora.sh -> ../apps/sophora/sophora.sh which refers to the start and stop script of the actual Sophora server version (this link uses the 'sophora' link from the apps directory).
  • webapps – here, the web applications configured for this Sophora server are located. Each web application has its own subdirectory (named after the application's context).
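Updating the server or a component then amounts to repointing the corresponding symbolic link in the apps directory. A minimal sketch (directory and release names are examples from the structure described above; `repoint_release` is a hypothetical helper, not part of Sophora):

```shell
# repoint_release: switch the 'sophora' link in the apps directory to a
# new release directory (a sketch of the update step described above).
repoint_release() {
  apps_dir="$1"      # e.g. cms-directory/apps
  new_release="$2"   # e.g. com.subshell.sophora.server-1.31.0
  # -s: symbolic, -f: replace an existing link, -n: do not follow the old link
  ln -sfn "$new_release" "$apps_dir/sophora"
}
```

After repointing the link, restarting the server would pick up the new release.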

The entire directory structure is shown below:

----cms-directory
--------apps
------------apache2
------------apache-tomcat-6.0.24
------------...
------------com.subshell.sophora.server-1.30.0
----------------...
----------------sophora.sh
------------...
------------sophora -> Symbolic link to com.subshell.sophora.server-1.30.0
------------...
--------sophora
------------config
------------data
------------logs
------------repository
------------repository_archive
------------sophora.sh -> Symbolic link to ../apps/sophora/sophora.sh
--------webapps
------------[contextName1]
----------------cache
----------------conf
----------------logs
----------------webapp
------------[contextName2]
----------------cache
----------------conf
----------------logs
----------------webapp
------------[...]
--------Sophora-component-1
--------Sophora-component-2

Configuring the Sophora Server

The central configuration file of each Sophora server is called sophora.properties and is located within the config folder of the sophora directory. An example configuration is shown below:

# Vmargs for the java process
vmargs=-XX:+UseParallelGC -XX:+UseParallelOldGC -Xss512K -Xmn640M -Xms2G -Xmx2G -XX:PermSize=128m -XX:-UseGCOverheadLimit
# Installation directory of the Sophora server
sophora.home=/cms/ts/sophora
# RMI ports
#sophora.rmi.servicePort=1198
#sophora.rmi.registryPort=1199
sophora.remote.jmsbroker.port=1397
# For fall-back and staging slaves you have to specify the master server here
sophora.remote.jmsbroker.host=194.113.141.96
sophora.replication.slaveHostname=server1
# Type of the server. Possible values are "cluster", "master", "slave" (meaning the fall-back slave) and "stagingslave"
sophora.replication.mode=stagingslave

Configuration Parameters of the sophora.properties File

The following overview lists all valid parameters for the sophora.properties file together with an explanation of each.

  • vmargs – Parameters for the java process.
  • sophora.home – Workspace of the Sophora server. This folder contains the subdirectories config, data, logs, repository and repository_archive.
  • sophora.rmi.servicePort – RMI port. Default: 1198
  • sophora.rmi.registryPort – Port of the RMI registry. Default: 1199
  • sophora.remote.api.http.address – IP address to bind the HTTP port to.
  • sophora.remote.api.http.port – HTTP port to access the content manager API via HTTP. If this property is blank, the port is calculated as sophora.rmi.registryPort - 3; thus, by default this is 1196.
  • sophora.remote.api.https.port – HTTPS port to access the content manager API via HTTPS. If this property is blank, the port is calculated as sophora.rmi.registryPort - 4; thus, by default this is 1195.
  • sophora.remote.api.https.enabled – Defines whether the HTTPS port is enabled (default: false). For client connections using HTTPS see the client configuration.
  • sophora.remote.api.https.password – (Default: sophora)
  • sophora.remote.api.https.keyPassword – (Default: sophora)
  • sophora.remote.api.https.keyStore – The file name of the keystore. This file must be located in the directory sophora.home/config. (Default: "sophora.keystore")
  • sophora.http.proxy.host – HTTP proxy host address for the RemoteDataManager.
  • sophora.http.proxy.port – HTTP proxy port for the RemoteDataManager.
  • sophora.http.proxy.username – HTTP proxy username for the RemoteDataManager.
  • sophora.http.proxy.password – HTTP proxy password for the RemoteDataManager.
  • sophora.http.proxy.noProxy – List of excluded host names. Default: "127.0.0.1,localhost" (since 2.3.43, 2.4.22, 2.5.10, 2.6.0)
  • sophora.jmx.enabled – Activates the JMX interface: true or false.
  • sophora.jmx.username, sophora.jmx.password – Username and password for the JMX interface (optional). If no username and password are given, no authentication is required to use the JMX interface.
  • sophora.local.jmsbroker.port, sophora.remote.jmsbroker.port – Use sophora.local.jmsbroker.port to specify the port of the configured Sophora server itself and sophora.remote.jmsbroker.port to specify the port of the Sophora server to which this server connects. On staging slaves only sophora.remote.jmsbroker.port is required; since staging slaves do not have an embedded JMS broker, configuring sophora.local.jmsbroker.port is not necessary. Default: 1197
  • sophora.local.jmsbroker.host, sophora.remote.jmsbroker.host – Use sophora.local.jmsbroker.host to specify the host name or IP address of the embedded JMS broker; sophora.remote.jmsbroker.host refers to the host name or IP address of the master server. On staging slaves sophora.remote.jmsbroker.host must be configured; since staging slaves do not have an embedded JMS broker, configuring sophora.local.jmsbroker.host is not necessary. Default: localhost
  • sophora.replication.mode – Type of the server. Possible values are "none", "cluster", "master", "slave" (meaning the fall-back slave) and "stagingslave". With "none" the server has no connection to other servers and works standalone. When using the replication mode "cluster" it is mandatory to specify the concrete mode via the system property clusterMode; valid values are master, slave and open (the server which starts first becomes the master server).
  • sophora.replication.slaveHostname – Host name of the server. This name is used for the communication between Sophora servers. If this property is left blank, the host name is determined automatically. Specifying the host name is mandatory on a cluster server.
  • sophora.cluster.readAnywhere.available – Makes the slave server available for readAnywhere connections (default: true). The state can be changed via JMX: the ContentManager MBean shows the current value of ReadAnywhereAvailable and offers the operation toggleReadAnywhereStatus() to toggle it.
  • sophora.replication.delivery.<index>.url – URL of the delivery that is bound to this server. Here <index> is a key to associate certain delivery groups (see sophora.replication.delivery.<index>.groups). For each URL there has to be an assigned group with the same <index>.
  • sophora.replication.delivery.<index>.groups – Associates the delivery with at least one group (see sophora.replication.delivery.<index>.url). Multiple groups are associated using a comma separated list of group names (an example is given below).
  • sophora.replication.userName, sophora.replication.password – Username and password for JMS queues. If no username and password are provided, the default values (userName=sophora and password=jms) are used.
  • sophora.replication.syncQueueLimit – Maximum queue size (in bytes) to be held in memory. When this value is reached, the synchronisation waits until the slave has removed enough messages from the queue to drop below the threshold; e.g. 10485760 (= 10 MB)
  • sophora.replication.maxQueueSizeForAvailableState – Number of events in the replication queue before the slave is marked as unavailable; a slave that is more events behind the server than this number is considered unavailable. (Default: 500)
  • sophora.replication.restartDate – Date to start the synchronisation at. Format is "yyyy.MM.dd HH:mm". Only applicable if sophora.replication.mode=stagingslave or =slave.
  • sophora.replication.maxVersionsSyncCnt – Limits the number of document versions per document which are sent to the slave when the slave synchronises its content with a master. Default: 10000
  • sophora.replication.ignoreWebsites – Comma separated list of websites' UUIDs. Only applicable if sophora.replication.mode=stagingslave. A slave ignores documents located at the given websites; these documents are not transferred to the slave's repository.
  • sophora.documentManager.generateEvenIds – Create document IDs only with even numbers: true or false.
  • sophora.documentManager.generateIdsWithMinusAsSuffix – Create document IDs with a minus as idStem suffix: true or false. Default: false
  • sophora.documentManager.thumbnail.maxWidth – Maximum width of thumbnails in the light box. Default: 100
  • sophora.documentManager.thumbnail.maxHeight – Maximum height of thumbnails in the light box. Default: 100
  • sophora.documentManager.thumbnail.big.maxWidth – Maximum width of big thumbnails, e.g. for tooltips of image documents. Default: 300
  • sophora.documentManager.thumbnail.big.maxHeight – Maximum height of big thumbnails, e.g. for tooltips of image documents. Default: 300
  • sophora.documentManager.childNodeIdPropertyNames – Comma-separated list of property names. Upon loading and saving a document, all child nodes are scanned for these properties; if such a property is defined in a child node's CND but not yet assigned, it is assigned a random long number. This is done for the properties sophora:childNodeId and sophora-epg:childNodeId regardless of this configuration. Properties of mix-ins cannot be set in this way.
  • sophora.repository.defaultNodeTypes – URL of the sophora.cnd file that should be imported at the server's start-up. An empty string specifies that no CND should be imported.
  • sophora.repository.updateNodeTypes – URL of the sophora.cnd file that should be imported when updating the server. An empty string specifies that no CND should be imported. The action is only performed if the defaultNodeTypes property is left blank.
  • sophora.repository.language – The repository's language. If a repository is initialised from scratch, the descriptors of basic properties and system documents are set in the given language. Currently Sophora supports German and English, German being the default. Possible values for this property are "en" (for English) and "de" (for German).
  • sophora.configuration.document.externalId – Defines the configuration system document by its external id (default: sophora.configuration.configuration).
  • sophora.archive.enabled – Activates the archival storage if set to true. A second repository is created in the directory repository_archive; older versions of documents are moved to this repository. Default: true
  • sophora.archive.activeOnStartup – Start the archival storage on the server's start-up? Default: true
  • sophora.archive.maxVersionsToGet – Maximum number of document versions that should be processed at once. Default: 75
  • sophora.archive.maxVersionsToRetain – Maximum number of document versions that should be kept in the repository. Default: 5. If the value is configured to something less than 5, 5 is used instead.
  • sophora.archive.checkAllDocumentsForOldVersions – Check all documents for old versions in a background thread. Default: true. When set to false, only currently modified documents are checked.
  • sophora.archive.deleteOldVersionsAfterDays – When the age of an archive version exceeds this number of days, the version is removed from the archive. A version's age is based on its creation date. The default value is 0, meaning versions are not removed at all, i.e. kept for good.
  • sophora.archive.preserveNumberOfVersionsInArchiveRepository – The archiving processes keep this number of versions for all documents, regardless of how old the versions are. The default value is 0, meaning all versions may be removed by the archive worker.
  • sophora.deleteDocuments.archive – Determines whether documents are moved to the archive repository or removed completely. Only relevant if sophora.archive.enabled=true. Default: true
  • sophora.deleteDocuments.blockSize – Maximum number of deleted documents that should be processed in one run. Default: 300
  • sophora.deleteDocuments.minimumAgeDays – Minimum age (in days) of deleted documents to be processed by the worker job. The master needs deleted documents to synchronise temporarily unavailable slaves; therefore, the minimum age should not be set too small.
  • sophora.deleteDocuments.cronTriggerExpression – Cron expression defining when to run the job for deleted documents. Default: 0 15 * * * ? (at minute 15 of every hour)
  • sophora.deleteDocuments.delayMs – Delay in milliseconds between every single delete operation.
  • sophora.deleteDocuments.enabledOnStartup – Start the actual removal job for deleted documents when starting the server? Possible values are true and false.
  • sophora.deleteProposals.afterDays – Number of days deleted proposals are retained in the repository. Deleted proposals are not visible to the user but are needed for synchronising Sophora slaves with the master. Default: 30
  • sophora.cleanOfflineFilter.properties – Comma separated list of properties that should be removed from documents when they are set offline. This ensures that future publishing does not conflict with these historical values.
  • sophora.documentTimingActions.cronTriggerExpression – Cron expression defining when the server-side timing scripts are executed (more detailed information about timing actions can be found in the documentation about script managed Sophora extensions). The format and construction of a cron expression are described in the Quartz documentation, or use the CronMaker to generate the expression you need. Examples: 0 0 3 * * ? (every night at 3 a.m.) or 0 0 0/1 * * ? (every hour). If a single run would have to process more documents than the configured batchLimit, the timing script should run only once per day.
  • sophora.documentTimingActions.batchLimit – Batch limit for document timing actions (default: 10000). The batch limit is the maximum number of documents which are handled in a single run for each script.
  • sophora.cache.selectValues.refreshInterval – Time interval in seconds for running the cache refresh job; e.g. 60. Currently this functionality is only used for select value fields whose values are determined via XPath queries on documents (see the documentation for administrators).
  • sophora.autoPublish.username – Name of the user that appears as the one who triggered a document's automated publication. This needs to be a valid user with publish permissions for the node types that might be published this way. Read the user guide's instructions on how users can schedule this automatic process. The default value is admin; however, it is advisable to alter this property in order to identify corresponding documents easily, e.g. to search for them or to find log file entries. If you set this property to the special value [LAST_MODIFIER], the automated publication is performed by the last modifier of the document; this way, the document is published by the same user who released it. An internal thread runs periodically and looks for scheduled publications.
  • sophora.autoPublish.legacyMode – Must be set to true if automated publishing using the released state is used; this should only be set for legacy installations. In previous releases of Sophora, a document could automatically be published at a future date by setting the "Publish at" property and then releasing the document. If set to false (the default), automated publishing must be done by setting the "Publish at" property and publishing the document.
  • sophora.authenticate.user.ignoreUppercase – If set to true, the server ignores the case of user names during login. In addition, the creation of users with uppercase user names is disallowed. See #Ignoring Case of User Names for additional information.
  • sophora.authenticate.checkForIncorrectLogins – Determines whether the server checks for invalid logins (i.e. a user entering his password incorrectly several times) and locks the account after the configured number of failed login attempts (default: false). This property is accessible via JMX.
  • sophora.authenticate.incorrectLoginCount – How many times a user may enter a wrong password before the account is locked (default: 3). To reset the failed login attempts of a user, open the user's admin area and set the field "Incorrect logins" to 0.
  • sophora.authenticate.enableUserLogin – Determines whether users are allowed to log in to the server (default: true). Admin users are always allowed to log in, even if this property is set to false.
  • sophora.solr.embedded.enabled – Start a Solr web application embedded in the server process (default: true).
  • sophora.solr.indexer.enabled – Automatically create indexes and send changed documents to the configured Solr instance (default: true).
  • sophora.solr.port – HTTP port of the Solr instance. When an embedded Solr is started, the normal HTTP port (sophora.remote.api.http.port) is used. (Default: 1196)
  • sophora.solr.hostname – HTTP host of the Solr instance. When an embedded Solr is started, the normal HTTP host name (sophora.remote.api.http.address) is used. (Default: localhost)
  • sophora.solr.username – User name for the basic authentication used by the indexer and by the embedded Solr instance. (Default: solr)
  • sophora.solr.password – Password for the basic authentication used by the indexer and by the embedded Solr instance. (Default: solr)
  • sophora.solr.iquerysearch.enabled – Specifies whether the Solr index should be used as the default search engine for all IQuery search operations. (Default: true)
  • sophora.lucene.maxClauseCount – Maximum number of boolean clauses permitted in Lucene queries. (Default: 10000)
  • sophora.mail.smtp.host – Host name of the SMTP server that is used to send mail. Used for the 'password lost' feature.
  • sophora.mail.sender.email – E-mail address used as the sender when the Sophora server sends mail. Used for the 'password lost' feature.
  • sophora.mail.sender.name – Name used as the sender's real name when the Sophora server sends mail. Used for the 'password lost' feature.
  • sophora.ibf.enabled – Enables "invertible bloom filters" for efficient document count comparisons using the Advanced Admin Dashboard. This slightly increases the memory usage of the server. (Default: false)
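Given the parameters above, a deployment script might sanity-check a staging slave's configuration before start-up. A minimal sketch (the helper name and file path are illustrative; the property names are those documented above, and the check is deliberately not exhaustive):

```shell
# check_stagingslave_props: verify that a sophora.properties file for a
# staging slave contains the settings it needs (a sketch, not exhaustive).
check_stagingslave_props() {
  props_file="$1"
  # A staging slave must declare its mode and know its master's JMS broker.
  for key in sophora.replication.mode \
             sophora.remote.jmsbroker.host \
             sophora.remote.jmsbroker.port; do
    grep -q "^${key}=" "$props_file" || { echo "missing: $key"; return 1; }
  done
  grep -q '^sophora.replication.mode=stagingslave$' "$props_file" \
    || { echo "mode is not stagingslave"; return 1; }
  echo "ok"
}
```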

Configuring connected deliveries of a Sophora (Staging) Server

Deprecated: These properties should be replaced with sophora.delivery.externalUrl and sophora.delivery.cache.groups in the sophora.properties of the Sophora web application.

Which deliveries are connected to a staging slave is configured in the staging slave's sophora.properties file. For every delivery the following properties have to be set:

sophora.replication.delivery.NUMBER.url=DELIVERYURL
sophora.replication.delivery.NUMBER.groups=GROUP1,GROUP2...

  • NUMBER is the number of the property set, starting with "0". One set is required for each connected delivery.
  • DELIVERYURL is the URL to access the delivery (including the Tomcat's context path). The deliveries use these URLs to update and synchronise each other.
  • GROUP1... is the name of one group. Multiple group names are separated by commas. It is recommended to choose simple but meaningful names (e.g. "web").

The following example shows a configuration including three deliveries organised in three groups (see sophora.replication.delivery.<index>.groups in the table above):

sophora.replication.delivery.0.url=http://delivery1:8080
sophora.replication.delivery.0.groups=group1
sophora.replication.delivery.1.url=http://delivery2:8080
sophora.replication.delivery.1.groups=group1,group2,group3
sophora.replication.delivery.2.url=http://delivery3:8080
sophora.replication.delivery.2.groups=

Select Values

Select values of this kind are outdated: they are now modelled as system documents and can be managed from the administrator view within the deskclient. The form of select values presented here still works, but may be removed in future versions of Sophora.

The server's config directory has a subfolder called selectvalues. This folder contains XML files that define the selectvalues for the clients.

As an example, consider the story_type.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<data>
<entry value="article" label="Article" default="true"/>
<entry value="chatprotocol" label="Chatprotocol"/>
<entry value="chronology" label="Chronology"/>
<entry value="dossier" label="Dossier"/>
<entry value="background" label="Background"/>
<entry value="interview" label="Interview"/>
<entry value="comment" label="Comment"/>
<entry value="newsflash" label="Newsflash"/>
<entry value="portrait" label="Portrait"/>
<entry value="statement" label="Statement"/>
</data>
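For a quick overview of the values defined in such a file, the value attributes can be listed with a small script. A sketch that assumes the one-entry-per-line layout shown above (it is not a general XML parser; `list_select_values` is a hypothetical helper):

```shell
# list_select_values: print the value attribute of each <entry> element
# in a select-values XML file (line-based, as in the example above).
list_select_values() {
  sed -n 's/.*<entry value="\([^"]*\)".*/\1/p' "$1"
}
```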

Repository

The content is stored in a JCR repository, which itself uses a database system. The default database system is Derby. This is a good choice for testing, development and for the Sophora delivery servers. For production systems we recommend a more feature-rich database system:

  • MySQL 5 (no special edition is required, the free community edition is fine)
  • Oracle 10
  • PostgreSQL

Sophora has no additional requirements regarding the database system. We support all systems which are supported by our JCR implementation Jackrabbit.

To configure a database for an empty repository, a custom repository/repository.xml (and repository_archive/repository.xml) has to be created before the server is started for the first time. To change the configuration of an existing repository, the files repository/workspaces/default/workspace.xml and repository_archive/workspaces/default/workspace.xml must be changed as well. Both files are created from the information in the repository.xml when the server is started for the first time.

The repository.xml contains two sections of configuration: one for the default workspace and one for the version storage. The workspace section is a template for the workspace.xml. Once the workspace.xml exists, changes in this section have no effect.

Both sections in the repository.xml contain two important configurations: the persistence configuration and the search index configuration. The persistence configuration determines the database type and connection parameters. This is an example for a MySQL connection:

<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.MySqlPersistenceManager">
	<param name="driver" value="com.mysql.jdbc.Driver"/>
	<param name="url" value="jdbc:mysql://localhost:3306/sophora?autoReconnect=true"/>
	<param name="user" value="sophora"/>
	<param name="password" value="sophora"/>
	<param name="bundleCacheSize" value="256"/>
	...
</PersistenceManager>

This is an example for the search index configuration:

<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
	...
	<param name="respectDocumentOrder" value="false" />
	<param name="minMergeDocs" value="10000" />
	<param name="mergeFactor" value="5" />
	<param name="cacheSize" value="10000" />
	<param name="initializeHierarchyCache" value="false" />
</SearchIndex>

LevelDB persistence manager

This feature requires server version 2.1.15, 2.2.3, 2.3.0 or higher.

For staging slaves there is a special, very fast and compact persistence manager. It stores the data in LevelDB, a fast key-value storage library written at Google. LevelDB is written in C++ and runs on most Unix platforms; it is integrated into Java via JNI.

LevelDB is not suited for binary data, thus it should only be used in combination with a binary data store (see next section).

It can be configured as follows:

<PersistenceManager class="com.subshell.sophora.jackrabbit.LeveldbPersistenceManager">
	<!-- Standard Jackrabbit parameter -->
	<param name="bundleCacheSize" value="1024" />
	<!-- LevelDB parameter -->
	<param name="cacheSizeMB" value="512" />
</PersistenceManager>

The type of persistence manager (Oracle, Derby, LevelDB) can only be set for an empty repository. Therefore, to create a staging slave with LevelDB, an empty repository must be configured and completely synchronised with a master server.

When using LevelDB, the java process requires additional memory from the operating system. The amount of additional memory depends on the size of the repository and cannot be controlled by java parameters (e.g. -Xmx4G). Thus, LevelDB should only be used on hardware with enough memory.

Binary Data Store

This is a paid Sophora add-on. The Data Store has to be activated in the server configuration parameters of the sophora.properties file. Please refer to the Data Store's documentation for details.

The Sophora server's default configuration is sufficient for test systems. However, when running a production Sophora server you should have a look at our recommended settings for production environments.

Using the Sophora Server

The shell script cms-directory/sophora/sophora.sh manages the Sophora server. It can be used for the following operations:

  • Starting
  • Stopping
  • Requesting whether the archival storage is active
  • Starting the archival storage
  • Stopping the archival storage
  • Executing scripts
  • Starting the staging
  • Stopping the staging

Except for starting the server and the execution of scripts these operations can also be triggered using the server's JMX interface.

If the JMX interface requires authentication but no username and password are given in the sophora.properties configuration file, you have to append the credentials, separated by whitespace, to the JMX command. If credentials are given in both places, those from the command take precedence over those from the configuration file.

Starting

> cd cms-directory/sophora 
> ./sophora.sh -start

Starting the Sophora server as a master or as a fall-back slave is usually quick. Starting it as a staging slave might take a while, because the initialisation time depends on the amount of files the staging slave has to synchronise. When the synchronisation has finished, the following log entry appears:

[2007-09-15 11:37:21,914, INFO] [ActiveMQ Session Task] {} application.replication.ReplicationSlave:231: got syncFinished

After the start-up, the script sophora.sh will create the file sophora.pid in the cms-directory/sophora/logs directory. This file contains the process ID (PID) of this server instance. When the server has been shut down, this file will be deleted. If the server did not terminate correctly, the file is kept and has to be removed manually before the next start of the Sophora server.
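A monitoring or init script might use this file to decide whether the server process is still alive. A minimal sketch (`check_sophora_pid` is a hypothetical helper; the file layout is as described above):

```shell
# check_sophora_pid: report the state of the process recorded in a
# sophora.pid file (running, stale, or no file at all).
check_sophora_pid() {
  pid_file="$1"   # e.g. cms-directory/sophora/logs/sophora.pid
  if [ ! -f "$pid_file" ]; then
    echo "no-pid-file"
    return 1
  fi
  pid=$(cat "$pid_file")
  if kill -0 "$pid" 2>/dev/null; then
    echo "running:$pid"
  else
    # Left-over file from a server that did not terminate correctly;
    # it has to be removed manually before the next start.
    echo "stale:$pid"
  fi
}
```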

Note that the script returns before the Sophora server is able to handle incoming connection requests. If you want to ensure that the script does not return before the initialisation is complete, use the option -start_and_wait (JMX must be enabled).

> cd cms-directory/sophora
> ./sophora.sh -start_and_wait

With the option -start_and_wait the script waits for different conditions depending on the server mode. On a master server it waits until the server is started and JMX connections can be established. On slaves and staging slaves it additionally waits until the server is initially synchronised.

The option -start_and_wait may be parameterised in order to wait until all Solr cores or only specific Solr cores are initialised. These parameters are independent of the server mode. Use the parameter solr to wait until all Solr cores are initialised, or pass a list of Solr cores for which the script should wait; individual cores are specified with the prefix solr_. In the following example the script waits for the Solr cores core1, core2 and core3.

> cd cms-directory/sophora
> ./sophora.sh -start_and_wait solr_core1 solr_core2 solr_core3

Another option is -start_clean. The behaviour is similar to the -start option. The only difference is that on startup all previously registered slaves get removed from the list of known slaves.

> cd cms-directory/sophora
> ./sophora.sh -start_clean

The arguments clean and wait may also be specified as arguments of the start option.

> cd cms-directory/sophora
> ./sophora.sh -start wait clean

A staging slave only starts with a connection to a running master; otherwise the start process is paused until the master is available.

Stopping

There are different ways to stop the Sophora server:

The command -stop

The Sophora server is stopped via JMX. Before the Sophora server stops, the archive worker is stopped.

> cd cms-directory/sophora
> ./sophora.sh -stop

The command stop

A kill signal is sent and the script waits until the Sophora server has shut down.

> cd cms-directory/sophora
> ./sophora.sh stop

The command kill

A kill -9 signal is sent to the Sophora server.

> cd cms-directory/sophora
> ./sophora.sh kill

You can check whether the server has shut down correctly by means of the PID: note it down from the sophora.pid file before stopping the server, and verify afterwards that no process with this PID is running anymore.

The log file contains the following entry when the server has been stopped successfully:

[2007-09-20 10:04:07,550, INFO] [Thread-14] {} org.apache.jackrabbit.core.RepositoryImpl:979: Repository has been shut down
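A shutdown script might verify this entry before proceeding. A sketch (the log file path is an example; the message text is the one shown above; `clean_shutdown` is a hypothetical helper):

```shell
# clean_shutdown: return success if the given log file contains the
# Jackrabbit "Repository has been shut down" message.
clean_shutdown() {
  grep -q "Repository has been shut down" "$1"
}

# Example: clean_shutdown cms-directory/sophora/logs/sophora.log
```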

The Throttle Mode

The Sophora server has a throttle mode for handling high load. This is described in detail here.

Starting Archival Process

> cd cms-directory/sophora 
> ./sophora.sh startArchival

Stopping Archival Process

> cd cms-directory/sophora 
> ./sophora.sh stopArchival

Checking the Archival Process

Use the subsequent command to get the archive worker's state (JMX must be enabled):

> cd cms-directory/sophora 
> ./sophora.sh isArchiving

A value between 0 and 2 will be returned, with the following meaning:

  • 0 - The archival storage is currently not active.
  • 1 - The archival storage is active.
  • 2 - The archival storage is disabled and no more documents can be enqueued. Nonetheless, the remaining queue will still be processed.
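In a monitoring script, the returned value could be mapped to a readable message. A sketch (`describe_archival_state` is a hypothetical helper; the numeric states are those listed above):

```shell
# describe_archival_state: translate the isArchiving return value into
# a human-readable message.
describe_archival_state() {
  case "$1" in
    0) echo "archival storage not active" ;;
    1) echo "archival storage active" ;;
    2) echo "archival storage disabled, remaining queue is being processed" ;;
    *) echo "unknown state: $1" ; return 1 ;;
  esac
}
```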

Executing Scripts

You can invoke customised scripts in the following way:

> cd cms-directory/sophora 
> ./sophora.sh runScript script.bsh

For detailed information about customised scripts and valid script languages, refer to the section Script managed Sophora extensions of the deskclient administrator documentation.

Starting the Staging

> cd cms-directory/sophora 
> ./sophora.sh startStaging

See section Managing a JMS Connection Between Master And Staging Slave.

Stopping the Staging

> cd cms-directory/sophora 
> ./sophora.sh stopStaging

See section Managing a JMS Connection Between Master And Staging Slave.

Different types of server modes

The property sophora.replication.mode selects the mode in which the server is started. There are four different modes: none, master, slave and stagingslave.

Server mode none

The server starts in standalone mode. No JMS broker is launched and no JMS connection to a broker is established. This mode is useful for test systems without any slave. Another use case is to temporarily disconnect a staging slave from its master: normal staging slaves do not start without a connection to their master, so to start the Sophora server on the delivery server anyway, it can be started in mode none.

Server mode master

This is the normal mode for the main Sophora server. In this mode a JMS broker is launched within the server process. With the help of this JMS broker, slaves can connect to the master and receive documents and other data.

Server mode slave

This mode is also called replication slave or backup slave. In this mode the slave receives all data from the master. Thus, it is a full copy of the master and can be used in cases where the master is not available. When started in slave mode, only admin users (users with a role in which the system permission "ADMIN" is set) can log into the server.

These slaves are connected to a persistent JMS queue on the master. Thus, no special synchronisation is needed if a slave is temporarily disconnected from the master.

Server mode stagingslave

This mode is used for the Sophora server in the delivery. In this mode the slave receives only published data from the master. When documents are taken offline on the master, the corresponding documents on the staging slave are marked as deleted. The version history of documents is not maintained on staging slaves; documents are only available in their last published version.

When started, staging slaves always send a sync request to the master. Based on the last modification date of each document on the slave, the master sends all new documents (and other data) to the slave.

Getting server mode

You can 'ask' the Sophora server for its server mode. Calling the following URL pattern http://<Hostname of the master>:<HTTP port>/serverMode will return a JSON snippet containing the server mode; for example:

{"serverMode":"MASTER"}

Possible values are: "STAGING_SLAVE", "REPLICATION_SLAVE", "MASTER" and "NONE"
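The JSON snippet is small enough to parse with standard shell tools. A sketch, assuming the single-line response shown above (the helper name is hypothetical; hostname and port in the usage comment are placeholders):

```shell
# Extract the value of the "serverMode" field from the JSON snippet on stdin.
parse_server_mode() {
    sed -e 's/.*"serverMode":"\([^"]*\)".*/\1/'
}

# Usage (hostname and port are placeholders):
#   wget -q -O - "http://master.example.com:1196/serverMode" | parse_server_mode
```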

Managing a JMS Connection Between Master And Staging Slave

It is possible to disconnect an active staging slave from the master server. The staging slave then no longer receives JMS messages, and changes in the content are not recognised by the delivery. This behaviour might be desired if there is maintenance to be done at the master and the deliveries should continue their work. It is therefore also possible to reconnect the master and its slave; the first thing that happens then is a synchronisation.

Starting/Stopping the Staging via Shell Script

Starting the Staging

> cd cms-directory/sophora 
> ./sophora.sh startStaging

Stopping the Staging

> cd cms-directory/sophora 
> ./sophora.sh stopStaging

Starting/Stopping the Staging via HTTP

Calling the following URL pattern http://<Hostname of the slave>:<HTTP port of the slave>/stagingSlaveAdmin/<Operation> enables you to execute the subsequent operations remotely:

  • /stop – Stops the JMS connection to the master.
  • /start – Establishes a JMS connection and synchronises the slave.
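These operations can be scripted. The following sketch only assembles the URL (the helper name is hypothetical; hostname and port in the usage comment are placeholders):

```shell
# Build the admin URL for a staging slave operation ("start" or "stop").
staging_admin_url() {
    echo "http://${1}:${2}/stagingSlaveAdmin/${3}"
}

# Usage (hostname and port are placeholders):
#   wget -q -O - "$(staging_admin_url slave.example.com 1196 stop)"
```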

Starting/Stopping the Staging via JMX

The start and stop operations may also be executed via the JMX interface by using the start/stop operations on the com.subshell.sophora/StagingSlaveAdmin MBean. The bean also provides the information whether staging is enabled or not.

Getting All Connected Slaves

You can 'ask' the Sophora server for its connected staging slaves. This function is only available if the server has been started with the property

sophora.replication.mode=master

Calling the following URL pattern http://<Hostname of the master>:<HTTP port of the master>/replicationSlaves/ will return a comma separated list of connected slaves; for example:

STAGING_SLAVE:server1,server2
REPLICATION_SLAVE:

In this example the server has two connected staging slaves (server1 and server2) but no replication slave.

The returned list of staging slaves contains the configured names that are connected to the master server at that moment.

The returned list of replication slaves contains the names of all replication slaves that have ever been connected. This means that replication slaves in the returned list are not necessarily connected to the master right now.

To configure the name of a replication or staging slave, you have to edit the following property in the sophora.properties file:

sophora.replication.slaveHostname=server1

If this property is empty, the hostname is determined automatically. However, the property has to be present.

Shell Script to Determine the Slaves

The following shell script provides the function get_slaves, to which you pass the URL and the desired type of slave. The function returns a list of slave names which you might want to iterate through. A corresponding example is given below.

#!/bin/bash

# get_slaves URL TYPE
# Fetches the slave list from the given URL and prints the names of the
# slaves of the requested type (STAGING_SLAVE or REPLICATION_SLAVE), one per line.
function get_slaves() {
    wget -q -O - "${1}" | perl -ne "if (/^${2}:.+/) { s/^${2}://; print join(\"\\n\", split /,/); }"
}

# Example: print all currently connected staging slaves.
get_slaves "http://tarski.subshell.com:3696/replicationSlaves" "STAGING_SLAVE" | while read slave ; do
    echo "slave: ${slave}"
done

Archival Storage

Problem and Solution

The goal of the archival storage is to remove older versions of documents from the actual repository but to keep them available in an archive so that the repository is disburdened. This is not the same as the archive function within the delivery (which might be "Show me all stories that are 30 days old").

Certain document types (especially homepages) may have thousands of versions. This is not only a matter of disk space but rather a crucial factor when publishing these documents (the more versions a document has the longer it takes to publish it again).

Archived versions of documents are kept in a separate repository (namely repository_archive) so that older versions can be restored easily.

Implementation

This configuration is an example for activating the archive repository:

sophora.archive.enabled=true
sophora.archive.maxVersionsToGet=100
sophora.archive.maxVersionsToRetain=1
sophora.archive.deleteOldVersionsAfterDays=370
sophora.archive.activeOnStartup=true

This repository for the archive content is then located next to the main repository (see the directory structure).

The Archive Repository

The archive repository only contains archived content, i.e. older versions of documents. It contains neither user information nor node type configurations. As a consequence, the archive repository cannot be turned into a 'real' main repository used by a Sophora server for the daily business. The only data needed about the archived documents' node types are the CND specifications. Without them, it is not possible to copy nodes from the main repository to the archive. Therefore, the CNDs have to be announced at the master repository, and changes in these files have to be committed explicitly.

The Procedure

Old versions of a document are moved from the main repository to the archive repository when the total number of document versions in the main repository exceeds the defined threshold. This threshold is given by the property sophora.archive.maxVersionsToRetain. Hence, the main repository retains the latest X versions of the document, where X equals the previously mentioned threshold.

The actual work of the archival storage is done by the archive worker, which is described below.

Handling References

When a document version is archived, it is likely that documents which are referenced by this version are not yet archived, i.e. are not available in the archive repository. Therefore, it is not always possible to store a document's version exactly as it appeared in the main repository.

To solve this problem, each such reference is replaced by an external reference. In more detail: When a document version is read from the main repository and about to be archived, all existing references to other documents are converted into external references. This way no problems occur when saving this version in the archive repository. When an old version is retrieved from the archive repository, these external references need to be resolved again.

Even references to already archived documents will be replaced by external references when saving a version to the archive repository because such references would always point to only the latest version of the referenced document.

Accessing Archived Versions

Displayed Versions in the Deskclients

The document versions shown in the Sophora deskclients are retrieved from both the main repository and the archive repository. Hence, the entire history of a document can be viewed there.

Repository Transactions

To access the archive repository you need a separate Sophora session and your own transaction. Transactions on the main repository are generated by annotations within the source code; thereby, you cannot specify the transaction manager that should be used. In contrast, transactions on the archive repository are defined in the Spring configuration using AOP pointcut statements.

The Archive Worker

When a document is published, its UUID is added to the first-in-first-out queue of the archive worker, which moves each version separately. The archive worker takes all document versions above the defined threshold (see section The Procedure) and copies them to the archive repository.

The archive worker is separately configured for every Sophora instance (master, fall-back, staging). Archival operations are neither replicated to fall-back systems nor staged to staging slaves.

Setting the maximum number of versions in the archive repository for a single document

If you want to set a maximum number of versions to keep for a single document in the archive repository, you can use the following property which is configured in the mixin sophora-mix:document:

- sophora:maxVersionsToKeep (long)

Every 5 minutes, the archive worker checks for documents that have this property set and deletes all archived versions of each of these documents that exceed the given number. The archive worker only removes versions if a smaller number for the versions to retain in the main repository (sophora.archive.maxVersionsToRetain) is configured.

As an example, if maxVersionsToRetain is set to 5 and the document property maxVersionsToKeep has a value of 2, a further 5 versions of the document are still retained in the main repository; the maxVersionsToKeep number refers only to the versions in the archive repository.

Removal

Documents that are marked as deleted* can be irrevocably removed after a configured period of time. There are corresponding properties in the sophora.properties file (all start with sophora.deleteDocuments) which determine whether deleted documents should simply be removed from the main repository or, together with their versions, moved to the archive repository.

1. Example: Documents are archived before they are removed from the main repository

sophora.archive.enabled = true
 
sophora.deleteDocuments.enabledOnStartup = true
sophora.deleteDocuments.archive = true
sophora.deleteDocuments.minimumAgeDays = 180

2. Example: Documents are not archived before they are removed from the main repository

sophora.deleteDocuments.enabledOnStartup = true
sophora.deleteDocuments.archive = false

3. Example: The following settings are used to automatically remove documents from staging slaves when they are not used any more (offline, deleted):

sophora.archive.enabled=false
sophora.archive.activeOnStartup=false
 
sophora.deleteDocuments.enabledOnStartup=true
sophora.deleteDocuments.archive=false
sophora.deleteDocuments.blockSize=100
sophora.deleteDocuments.minimumAgeDays=1
sophora.deleteDocuments.intervalMinutes=1

Regardless of the configuration, the last live version of a document is never deleted, unless all versions of the document are due for deletion according to the configured minimum age in days.

A document can only be deleted if no published document version references it, i.e. if only older versions of other documents point to the document to be deleted. However, when you open such an old version that references a deleted document, you will get a warning.

* Documents marked as deleted can be found using the search filter "Only deleted documents" in the Sophora deskclient. In the same search filter you can also search for completely deleted documents. Completely deleted documents are only available in the archive repository.

Live-Workspace

Besides the default workspace, a second, so-called "live" workspace exists in the repository. The default workspace contains the last available state of every document. Every operation (save, publish, delete, ...) modifies the document in the default workspace. In contrast, the live workspace contains the last live state of every published document. Only documents which currently have a live version are included in the live workspace. Every time a document is published, a copy of the document is saved in the live workspace. Every time a document is set offline or deleted, the document is deleted from the live workspace. Thus, the live workspace represents a complete set of currently available live documents. To avoid redundant data in the repository, all binary properties are omitted when copying documents from the default to the live workspace.

The live workspace is used for the efficient execution of queries dealing with the live state of documents, e.g. the execution of timing action scripts (com.subshell.sophora.api.scripting.ITimingActionScript) which refer to document properties (e.g. broadcast date) of the last live version. If these properties were modified in the last working version without being published, it was impossible to find the relevant documents in former versions of Sophora. Now the queries for timing action scripts are executed in the live workspace and thus refer to the properties of the last live version.

For compatibility reasons, the queries for existing timing action scripts continue to be executed in the default workspace. To enable the new query mode the timing action scripts must implement the interface ITimingActionScriptSearchInLiveWorkspace (from package com.subshell.sophora.api.scripting).

To use the new live workspace in other contexts, the IContentManager interface offers a new method findPublishedDocumentUuids().

In general, a separate folder exists for every workspace in the directory repository/workspaces. So besides the default workspace in repository/workspaces/default, a folder repository/workspaces/live now exists. It contains the separate Lucene search index and a workspace.xml. If the data is stored in a Derby database, a folder for the database (repository/workspaces/live/db) also exists. The workspace.xml contains the configuration for the database. If the file does not exist, it is created with a default configuration derived from repository/repository.xml; in this configuration the Derby persistence store is used. In most cases the database for the live workspace is much smaller than the database for the default workspace (because of the omitted binary data). In addition, the data within the live workspace can be derived from the data in the default and the version workspaces (see next chapter). Thus, the requirements for the database are not as high as for the other workspaces (default and version).

Updating the live workspace

After an update to the new Sophora version, the live workspace is created automatically without any data. To fill all currently available live documents into the new workspace an update process has to be started.

To update the live workspace, you need to connect to the Sophora server via JMX. There is an MBean LiveWorkspaceUpdater on which the operations startUpdate and stopUpdate can be called. Moreover, you can set an interval (in milliseconds) to wait between the copy operations for each document. After the update is started, you can see the number of documents currently in the live workspace (ExistingDocumentsInLiveWorkspaceCount) and the number of published documents in the default workspace (DocumentsWithLiveVersionCount). The attribute RemainingDocumentsCount shows you how many documents still have to be processed. Determining RemainingDocumentsCount executes a query, so monitoring this attribute can slow down the server.

Automatic Workflow

Documents in Sophora have different workflow states, for example working, released and published. Changes to these states can be made manually via the DeskClient or by other programs via the Sophora API. Besides this, there are state changes which are done automatically by the server.

If the server is not running when the date for an automatic workflow step is reached, the document state will be changed two minutes after the server starts.

Publish at

It is possible to schedule an automatic publish operation on a document. To accomplish this, two steps have to be done:

  1. Set the property sophora:publishAt to a date in the future.
  2. Publish the document.

When the date of the property sophora:publishAt is reached, the server publishes the document without any checks. It is not relevant whether the document had an older published version or had no published version at all. The publish operation is done by the server, however, the value for "published by" is taken from the configured value in the property sophora.autoPublish.username. If this property is set to "[LAST_MODIFIER]" the server takes the "modified by" value from the document and copies it into the "published by" property of the document.

Online to (End date)

It is possible to schedule an automatic offline operation on a document. To accomplish this, two steps have to be done:

  • The property sophora:endDate must be set.
  • The document must be published.

Changing the document afterwards will not stop the automatic offline operation. To change the offline date, the document must be published with the changed (or removed) date. The server always takes the offline date from the last published document version. The offline operation is done by the server, however, the value for "published by" is taken from the configured value in the property sophora.autoOffline.username. If this property is set to "[LAST_MODIFIER]" the server takes the "modified by" value from the document and copies it into the "published by" property of the document.

Cyclic Online/Offline

It is also possible to configure a cyclic online/offline workflow where documents will be set online/offline via a cron expression. To configure such a behaviour, you have to configure two properties:

sophora:cronOff
The cron expression set in this property will be used to set the document offline periodically.

sophora:cronOn
The cron expression set in this property will be used to publish the document periodically.

You can configure these properties by including the mixin sophora-mix:cronControl in your node type. You will also get two additional properties: sophora:cronNextOffDate and sophora:cronNextOnDate. These properties will be used internally and should not be edited manually.
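For illustration only, such a configuration could publish a document every morning and take it offline every evening. The values below assume Quartz-style cron expressions with a leading seconds field; this syntax is an assumption, so check which cron dialect your Sophora version expects:

```
sophora:cronOn=0 0 6 * * ?
sophora:cronOff=0 0 22 * * ?
```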

Changing the document afterwards will not stop the automatic online/offline operation. To change the behaviour, the document must be published with the changed (or removed) properties. The server always takes the cron expressions from the last published document version. The offline and publish operations are done by the server, however, the value for "published by" is taken from the configured value in the property sophora.cronOffline.username. If this property is set to "[LAST_MODIFIER]" the server takes the "modified by" value from the document and copies it into the "published by" property of the document.

Enable and configure the Linkchecker

Starting from Sophora version 2.0 the Linkchecker is part of the Sophora server. How to enable and configure the Linkchecker...

Configuring the Sophora Server for HTTPS Connections

In order to open an HTTPS port, the property sophora.remote.api.https.enabled must be set to 'true'. Furthermore, a keystore file must be created and stored in the config directory below the Sophora home directory (sophora.home/config). The default filename for the keystore is sophora.keystore; it is possible to change the filename by setting the property sophora.remote.api.https.keyStore.

By default the HTTPS port is set to the RMI port minus 4 (1195), but the port is also configurable with the property sophora.remote.api.https.port. You can specify the passwords of the keystore with the properties sophora.remote.api.https.password and sophora.remote.api.https.keyPassword.

The keystore must contain at least a key pair and may contain a certificate. If the DeskClient and other Sophora components should verify the authenticity of the server, a valid certificate is required. By default, Sophora components do not verify the certificate. When they are configured to verify the certificate, you must specify a truststore on the client side. Details about the client HTTPS connection configuration can be found in the Administration Handbook of the Deskclient.

As the Sophora server internally uses a Jetty HTTP server, you may be interested in the Jetty documentation about the configuration of HTTPS connections.

For testing you can generate a key pair with the following command:

keytool -genkey -validity 731 -alias jetty -keyalg RSA -keystore sophora.keystore
 
gray:~ sophora$ keytool -genkey -validity 731 -alias jetty -keyalg RSA -keystore sophora.keystore
Enter keystore password:
Re-enter new password:
What is your first and last name?
 [Unknown]:  sophora.customer.com
What is the name of your organizational unit?
 [Unknown]:
What is the name of your organization?
 [Unknown]:  subshell GmbH
What is the name of your City or Locality?
 [Unknown]:  Hamburg
What is the name of your State or Province?
 [Unknown]:
What is the two-letter country code for this unit?
 [Unknown]:  de
Is CN=sophora.customer.com, OU=Unknown, O=subshell GmbH, L=Hamburg, ST=Unknown, C=de correct?
 [no]:  yes
Enter key password for <mykey>
    (RETURN if same as keystore password):
It is important that the name in the key (in this example sophora.customer.com) matches the configured hostname (sophora.remote.api.http.address) and the host name in the connection URL (e.g. https://sophora.customer.com:1195).
Note that modern browsers do not accept DSA keys.

User Authentication with LDAP

The user authentication within Sophora may be carried out by an LDAP server. To do so, the connection to the LDAP server has to be configured at the Sophora master server; only read operations are performed on the LDAP server.

Insert the following parameters into the sophora.properties of the master server:

sophora.ldap.enabled=true
sophora.ldap.connection.providerUrl=ldap://localhost:10389/dc=subshell,dc=com
sophora.ldap.connection.userDn=uid=admin,ou=system
sophora.ldap.connection.password=secret
sophora.ldap.userSearch.pageSize=500

Besides normal LDAP connections, secure LDAP can be used, too. For secure connections the URL starts with ldaps://. When using a secure connection, the standard Java validation of SSL certificates allows a connection only to trusted servers. This validation is turned off by default. To enable the validation of SSL certificates, the property sophora.ssl.disableCertificateCheck can be set to false.
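For example, a secure connection could be configured along these lines (hostname and port are placeholders, not values from the original example):

```
sophora.ldap.connection.providerUrl=ldaps://ldap.example.com:10636/dc=subshell,dc=com
```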

The properties sophora.ldap.connection.userDn and sophora.ldap.connection.password can also be configured via the DeskClient. You can add one property or both properties in the "Configuration" document in the administrator view. The configuration in the properties file (sophora.properties) has precedence over the configuration in the administrator view.

The property sophora.ldap.userSearch.searchAll controls whether a list of all users is queried from the LDAP server. By default, only users created in Sophora are listed when a document search is restricted by a user. When this property is set to 'true', a query retrieves a list of all users from the LDAP server every 60 minutes.

The property sophora.ldap.userSearch.pageSize controls the block size for reading all users from the LDAP server. The configuration of the LDAP server might restrict the number of users that can be retrieved at once. So in case you have a lot of users and a restricted block size configured for your LDAP server, you might need to adjust this value.

If the LDAP server does not consider upper and lower case in user names, it is important to activate the property sophora.authenticate.user.ignoreUppercase=true in the Sophora server. Otherwise, user settings will be stored in the server multiple times for one user.

Mode of Operation

When a login is requested from either a Sophora Deskclient or any other Sophora component (e.g. the Sophora Importer), it is checked first whether the login name is an "ordinary" Sophora user. If that is the case, the given password is verified against the internal user's password. By that, it is ensured that administrators can access Sophora even if the LDAP server is (temporarily) unavailable.

If the login name is not a Sophora user or if the password did not match, the LDAP server is searched for this user. To search the LDAP server the subsequent properties are employed:

sophora.ldap.userSearch.searchBase=ou=users
sophora.ldap.userSearch.searchFilter=(uid={0})
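At search time, the {0} placeholder in the filter is replaced by the login name. A sketch of this substitution (the helper name is hypothetical; the usage comment shows roughly what the resulting search looks like with OpenLDAP's ldapsearch, using the connection data from the sophora.properties example above and a placeholder user name):

```shell
# Substitute the login name into the configured search filter,
# mirroring the {0} placeholder in sophora.ldap.userSearch.searchFilter.
build_user_filter() {
    echo "${1//\{0\}/$2}"
}

# Usage (user name "jdoe" is a placeholder):
#   ldapsearch -H ldap://localhost:10389 -D "uid=admin,ou=system" -w secret \
#       -b "ou=users,dc=subshell,dc=com" "$(build_user_filter '(uid={0})' jdoe)"
```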

Next, if the user has been found, the password is checked. In case of a successful authentication the roles assigned to this user are queried using the following parameters:

sophora.ldap.groupSearch.searchBase=ou=groups
sophora.ldap.groupSearch.searchFilter=(uniqueMember={0})

LDAP roles are directly mapped to roles in Sophora, i.e. only LDAP roles with the prefix "sophora" are respected. This prefix is then removed from these roles to check whether there is a corresponding role within Sophora. Thus, only the matching roles are associated with the user at hand.

LDAP users are not visible within the user management of Sophora's administration view whereas roles are exclusively created in Sophora.

When a user who has logged on via the LDAP server modifies documents, her username is automatically set as "last modifier" (or whatever the corresponding property is called in your repository) on these documents. In addition, some properties about a user are read from the LDAP server (see below). These are used to provide meaningful information to other users when they try to open a document that is locked by an "LDAP user". These properties are:

  • Name – LDAP attribute: "cn", "displayName"
  • Email address – LDAP attribute: "mail", "email", "emailAddress"
  • Telephone number – LDAP attribute: "telephonNumber"

User information is read only once, at the moment the user has logged on successfully. Thus, changes in the meantime do not apply until the next login.

Caching

The Sophora server caches the roles assigned in the LDAP server. For every user session, the Sophora server reads the assigned LDAP roles only once. When the assigned roles for a user are changed in the LDAP server, the user has to log in again for the changes to take effect.

Ignoring Case of User Names

If the property sophora.authenticate.user.ignoreUppercase is set to true, the Sophora Server ignores the case of user names during login. This is done by transforming the given user name to lowercase during login. Therefore, the user names of all Sophora users must be all lowercase. To enforce this, the creation of users containing uppercase characters in the user name is disallowed by the Sophora Server.

Because the user name is transformed to lowercase during login, only users with a lowercase user name can log in to the Sophora Server. Therefore make sure that there are no users containing uppercase characters in the user name.

The Sophora Importer will keep the case of any user names found in Sophora XML import files. For example, if an imported document is added to a proposal section, the user name given in the <sender> element is kept as-is.

JMX Monitoring

The Sophora Server can be monitored using JMX.

Configuration

Monitoring has to be enabled in the sophora.properties file before you can access the information as described here. The following snippet from the sophora.properties file shows how to activate JMX monitoring with access control:

sophora.rmi.servicePort=1198
sophora.rmi.registryPort=1199

sophora.jmx.enabled=true
sophora.jmx.username=admin
sophora.jmx.password=admin

With this configuration, the Sophora Server can be accessed using the following JMX connection string:

service:jmx:rmi://<HOST>:1198/jndi/rmi://<HOST>:1199/server
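The connection string can be assembled from the two ports configured above. A small sketch (the helper name is hypothetical; hostname is a placeholder):

```shell
# Build the JMX service URL from a host, the service port
# (sophora.rmi.servicePort) and the registry port (sophora.rmi.registryPort).
jmx_url() {
    echo "service:jmx:rmi://${1}:${2}/jndi/rmi://${1}:${3}/server"
}

# Usage, e.g. with JConsole from the JDK (hostname is a placeholder):
#   jconsole "$(jmx_url sophora.example.com 1198 1199)"
```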

The following image shows the connection settings dialog from VisualVM:

Available MBeans

Scripting

The Scripting bean shows information about registered server scripts. This is an MXBean, i.e., information about scripts is returned as open data types from the javax.management.openmbean package.

Attributes of the Scripting bean:

Name | Type | Description
DocumentChangeListenerScripts | CompositeData array | Information about IScriptDocumentChangeListener scripts.
EventScripts | CompositeData array | Information about IEventScript scripts.
RegisteredScriptsCount | int | Number of registered scripts.
TimingActionScripts | CompositeData array | Information about ITimingActionScript scripts.
ValidationScripts | CompositeData array | Information about IValidationScript scripts.

The node of each script contains the following attributes:

Name | Type | Description | Script type
name | String | Name of the script. | All
scriptDocumentUuid | String | UUID of the document with the script source. | All
scriptDocumentId | String | Sophora-ID of the document with the script source. | All
type | String | Type of the script. | All
executions | CompositeData array | List of script executions. | All except IValidationScript.
lastExecution | CompositeData array | The most recent execution of the script. | All except IValidationScript.

The node about a script execution contains the following attributes:

Name | Type | Description | Script type
startDate | String | The time and date when the script execution started. | All
durationMs | long | The duration of the script execution in milliseconds. | All
exceptionTrace | String | The stack trace of any exception that occurred during the script execution. | All
processedEvent | String | A string representation of the processed event. | IEventScript
changedDocumentUuid | String | The UUID of the changed document. | IScriptDocumentChangeListener
stateChange | String | The state change of the document. | IScriptDocumentChangeListener
processedDocumentUuid | String | The UUID of the processed document. | ITimingActionScript

AccessManager

Only one attribute of the AccessManager bean is accessible via JMX: CheckForIncorrectLogins, which can be set to either true or false. Changing this attribute might be necessary if the admin account (or another user) gets locked out by accident due to too many failed login attempts.