2011-12-29

Introducing Vista Search

The first version of the Vista Logic and Protocol bundles has been merged into the default branch.  Vista provides fast full-text indexing of content stored on the OpenGroupware Coils server.  The Vista component can currently index Contact, Document [meta-data], Enterprise, Note, and Task entities.  All the text fields of these entities are indexed, including the values of object properties and company values.  Vista searches are performed via HTTP requests to the protocol exposed at "/vista" on the server.
curl -v -u fred -o output 'http://coils.example.com/vista?term=detroit&term=steel&archived&type=enterprise'
Authenticate as user fred and search for enterprise entities, regardless of archived status, that contain the terms "detroit" and "steel".
The results of the HTTP request are JSON-encoded Omphalos representations of the first 100 entities that match the specified criteria.  The default Omphalos detail level is 2056 [Comment + Company Values].  If an alternate detail level is desired this default can be overridden using the "detail" URL parameter.  The following URL parameters are supported:
  • archived - Include entities in the search regardless of their archived status.  If not specified, archived entities will be excluded from the search results.
  • detail - The Omphalos detail level to use when representing entities in the response.  It is important to recognize that specifying a high detail level will reduce performance.
  • term - Specify a search term.  Any number of terms may be specified.
  • type - Limit the searched entities by type.  If no type is specified all indexed entities are searched regardless of type.  Multiple type parameters may be specified.
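The repeated term and type parameters compose naturally with a standard URL encoder.  The following is a minimal sketch of building such a request URL; the host name is a placeholder, and note that the bare "archived" flag is emitted here as "archived=", which servers generally treat the same way:

```python
from urllib.parse import urlencode

def vista_url(base, terms, types=(), include_archived=False, detail=None):
    """Compose a Vista search URL; the term and type parameters may repeat."""
    params = [('term', t) for t in terms]
    params += [('type', t) for t in types]
    if include_archived:
        params.append(('archived', ''))
    if detail is not None:
        params.append(('detail', str(detail)))
    return '{0}/vista?{1}'.format(base, urlencode(params))

url = vista_url('http://coils.example.com', ['detroit', 'steel'],
                types=['enterprise'], include_archived=True)
```

The resulting URL can then be fetched with any HTTP client (curl, as above, or urllib) using HTTP authentication.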
For code local to the server searches can be performed using the "vista::search" Logic command:
results = ctx.run_command('vista::search', keywords=['detroit', 'steel'],
                          entity_types=['enterprise'],
                          include_archived=True)
Perform a Vista search via Logic for all enterprises, regardless of archived status, that contain the terms "detroit" and "steel".

The recently packaged tool coils-request-index can request the creation or update of an entity's search vector.  Normally, when an entity is modified, a re-index is requested automatically [search vector generation happens in the background and is performed by the coils.vista.index component].  However, if large changes are made to the database, or for the initial index generation, use of coils-request-index may expedite the process.
coils-request-index --contact --enterprises --notes --documents --tasks
Request an index/reindex of all the entities of the specified types.
coils-request-index --objectid=10100
Request an index/reindex of the entity with objectId 10100.
If the index is already current for the entity the vector generation request will be discarded.
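The "already current" check can be thought of as a comparison between the entity's version and the version recorded for its stored search vector.  The helper below is a hypothetical illustration of that rule, not the actual component code:

```python
def should_generate_vector(entity_version, vector_version):
    """Hypothetical sketch of the index-currency check: honor a vector
    generation request only when the entity has changed since the stored
    search vector was built, or when no vector exists at all."""
    if vector_version is None:  # the entity has never been indexed
        return True
    return entity_version > vector_version
```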
This new feature does require a schema update to existing OpenGroupware databases.  This schema update will be required for version 0.1.45.
CREATE TABLE vista_vector (
  object_id  INT PRIMARY KEY,
  version    INT DEFAULT 0,
  edition    INT,
  entity     VARCHAR(25) NOT NULL,
  event_date DATE DEFAULT 'TODAY', 
  archived   BOOL DEFAULT FALSE,
  keywords   VARCHAR(128)[],
  vector     tsvector);
CREATE INDEX vista_idx_i0 ON vista_vector (entity);
CREATE INDEX vista_idx_i1 ON vista_vector (event_date);
CREATE INDEX vista_idx_i2 ON vista_vector USING gin(vector);
Vista search utilizes PostgreSQL's powerful tsearch text indexing module.  tsearch provides lexeme oriented indexing - so the server knows, for example, that "rats" and "rat" share the same stem.  Thanks to tsearch Vista searches are not only fast - they're clever!

2011-12-28

Accessing Server Configuration (Defaults)

When developing Logic, either a Command or a Service component, one frequent need is to check the value of a server configuration directive [a "default" in OpenGroupware speak].  Server configuration is accessed using an instance of the ServerDefaultsManager object.  The following code retrieves the value of the CoilsListenAddress directive; if no such default is defined it returns the value "127.0.0.1".
sd = ServerDefaultsManager()
HTTP_HOST = sd.string_for_default('CoilsListenAddress', '127.0.0.1')
The ServerDefaultsManager caches the server's configuration - so if you are going to check many defaults it is better to keep the object around rather than repeatedly creating it.  The ServerDefaultsManager provides the following methods for retrieving server defaults:
  • bool_for_default(directive) - Boolean configuration values are stored as the strings "YES" and "NO".  In practice, any value that isn't "YES" is interpreted as False, which is also the result if no such directive is defined.  The value returned by the method is a Python bool.
  • string_for_default(directive, default value) - Returns the value of the default as string or returns the specified default value if no such directive is defined.
  • integer_for_default(directive, default value) - Returns the value as an integer, or returns the specified default value if no such directive is defined. An exception is raised if the value cannot be represented as an integer.
  • default_as_dict(directive, default value) - Returns the value of the specified directive as a dictionary, or the specified default value if no such directive is defined.  An exception is raised if the value is not a dictionary.
  • default_as_list(directive, default) - Returns the value as a list, or the specified default value if no such directive is defined.  An exception is raised if the value is not a list.
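The "YES"/"NO" convention applied by bool_for_default can be sketched in a couple of lines.  This is an illustration of the documented behavior only (and it assumes a case-sensitive comparison), not the ServerDefaultsManager implementation itself:

```python
def yes_no_to_bool(value):
    """Interpret a boolean default per the convention above: only the
    exact string "YES" is True; any other value, including a missing
    directive (represented here as None), is False."""
    return value == 'YES'
```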
Regarding the actual loading of defaults: the defaults manager will load from (or save to) one of two sources.  If the file ".server_defaults.pickle" exists in the document root of the server, the defaults are loaded from (and saved to) that Python pickle file; otherwise the defaults are loaded from (and saved to) the OpenSTEP plist file at ".libFoundation/Defaults/NSGlobalDomain.plist".  Use of the OpenSTEP plist file facilitates parallel operation of OpenGroupware Coils with Legacy - both OpenGroupware Coils and OpenGroupware Legacy will operate using the shared configuration.
One caveat to remember is that OpenSTEP plist files are always stored in the ISO8859-1 encoding.  This includes both the server defaults and user defaults.  Both OpenGroupware Coils and OpenGroupware Legacy always store user defaults in OpenSTEP plist format.  Facilities for parsing and writing OpenSTEP plist files are provided by the coils.foundation module.
If you are developing a remote component that does not have access to the server's document root, your component can acquire a copy of the server's configuration by sending a "get_server_defaults" message to the coils.administrator component.  The payload of the response should contain the cluster's GUID [as the "GUID" key] and a copy of all the server defaults [in the "defaults" key].

2011-09-15

Log Rotation & OpenGroupware

OpenGroupware Legacy logs to numerous files in the directory /var/log/opengroupware; OpenGroupware Coils logs to the file /var/log/coils.log.  Managing these log files is an important part of service administration - bad things happen if the system's /var/log fills up.  Rotating these files can be accomplished using the excellent logrotate facility provided by your Linux distribution.  logrotate reads all the files present in the /etc/logrotate.d directory - each service that requires log rotation can simply create a configuration file for itself.  The recommended log rotation configurations for OpenGroupware Legacy and OpenGroupware Coils are as follows:

/var/log/opengroupware/*.log {
    copytruncate
    rotate 5
    daily
    size 10M
    missingok
    notifempty
    sharedscripts
    compress
}
The file /etc/logrotate.d/ogo for rotating OpenGroupware Legacy log files.

/var/log/coils.log {
    copytruncate
    rotate 5
    daily
    size 10M
    missingok
    notifempty
    sharedscripts
    compress
}
The file /etc/logrotate.d/coils for rotating OpenGroupware Coils log files.
You can adjust the "rotate" parameter to change how many log files you want to keep.  The combination of "rotate 5" and "daily" means the log rotator will keep 5 days worth of logs.  The most important option is "copytruncate" - this instructs the log rotator to make a copy of the current log file and then truncate the existing file, rather than moving the file and having a new file created.  This allows the OpenGroupware services to continue to use the same file-handle for logging during [and after] the log rotation operation.

2011-09-08

Installing Horde On CentOS6

Progress is being made on first-class integration between OpenGroupware Coils and Horde 4; that is, using Horde 4 as a Web 2.0 / AJAX front-end to the various services provided by OpenGroupware Coils.  This integration is primarily implemented using a custom JSON-RPC protocol bundle designed specifically for integration with Horde.  This article walks through the install to achieve a basic Horde installation.  Subsequent articles will document how to achieve OpenGroupware Coils integration.

This installation procedure assumes:
  • You'll be using a memcache instance for caching.
  • A PostgreSQL database for server meta-data and Horde user preferences; probably the same PostgreSQL instance you use for your OpenGroupware database.  But for this example we are just setting up a local PostgreSQL instance.  We'll cover installing OpenGroupware Coils on CentOS6 soon.
  • You'll be installing Horde into a virtual host in the folder "/srv/www/vhosts/horde".
  • The Horde install will have its own PEAR repository, kept as separate from the system PEAR repository as possible.
  • The ImageMagick packages will be installed to allow Horde to manipulate images (such as creating thumbnails of image attachments to e-mail).
  • GPG will be installed in order to support encrypted e-mails and notes.
  • We are starting with a clean CentOS6 install of the basic server profile.
  • SELinux is disabled (edit /etc/sysconfig/selinux). In a subsequent article re-enabling SELinux will be documented. 
  • To access this instance remotely the TCP/80 port must be allowed via the host's firewall configuration.  If you intend to enable TLS/SSL (secure) access port TCP/443 must also be allowed [on CentOS6 use system-config-firewall-tui to perform basic firewall configuration; production systems should consider using a more sophisticated tool such as FWBuilder].
Step#1 Install the required packages.
yum install php-devel php-pear make gcc libidn-devel pam-devel pcre-devel postgresql-devel libidn-devel memcached-devel memcached libmemcached zlib-devel cyrus-sasl-devel ImageMagick-devel ImageMagick php-ldap php-intl php-mbstring php-pdo php-pecl-apc php-pgsql php-soap php-tidy php-xml php-xmlrpc php-pecl-memcache libtidy-devel
Step#2 Create the vhost directory and initialize the PEAR package database.
mkdir -p /srv/www/vhosts/horde
pear config-create  /srv/www/vhosts/horde /srv/www/vhosts/horde/pear.conf
pear -c /srv/www/vhosts/horde/pear.conf install pear
Step#3 Add the Horde channel to the PEAR configuration and initialize the Horde role.  The last "run-scripts" command will prompt you for the root of the Horde installation; enter "/srv/www/vhosts/horde".
/srv/www/vhosts/horde/pear/pear -c  /srv/www/vhosts/horde/pear.conf channel-discover pear.horde.org
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf  install horde/horde_role
/srv/www/vhosts/horde/pear/pear -c  /srv/www/vhosts/horde/pear.conf run-scripts horde/Horde_Role
Step#4 Set the timezone in your php.ini file.  Edit /etc/php.ini and set the date.timezone property to the server's local timezone.  For example: "date.timezone=America/Detroit"

Step#5 Install the re2c package from the DAG repo.  You can optionally add the DAG repo to your system or just pull this one package.  re2c is used by the PHP interpreter to efficiently compile regular expressions.
curl --location -o /tmp/re2c-0.13.5-1.el6.rf.x86_64.rpm http://mandril.creatis.insa-lyon.fr/linux/dag/redhat/el6/en/x86_64/dag/RPMS/re2c-0.13.5-1.el6.rf.x86_64.rpm
rpm -Uvh /tmp/re2c-0.13.5-1.el6.rf.x86_64.rpm
Step#6 As in Step#5 you can add the DAG repo to your system or just pull the two packages necessary to build the geoip module.  Horde will use this to relate hosts to geographic regions.
curl --location -o /tmp/geoip-devel-1.4.6-1.el6.rf.x86_64.rpm http://mandril.creatis.insa-lyon.fr/linux/dag/redhat/el6/en/x86_64/dag/RPMS/geoip-devel-1.4.6-1.el6.rf.x86_64.rpm
curl --location -o /tmp/geoip-1.4.6-1.el6.rf.x86_64.rpm http://mandril.creatis.insa-lyon.fr/linux/dag/redhat/el6/en/x86_64/dag/RPMS/geoip-1.4.6-1.el6.rf.x86_64.rpm
rpm -Uvh  /tmp/geoip-1.4.6-1.el6.rf.x86_64.rpm /tmp/geoip-devel-1.4.6-1.el6.rf.x86_64.rpm
pecl install geoip
echo "extension=geoip.so" > /etc/php.d/geoip.ini
Step#7 Build and install the Imagick extension which will allow Horde to efficiently manipulate images.
pecl install Imagick
echo "extension=imagick.so" > /etc/php.d/imagick.ini
Step#8 Build the tidy module which Horde can use to sanitize HTML content.
pecl install tidy
echo "extension=tidy.so" > /etc/php.d/tidy.ini
Step#9 Build the lzf module which allows Horde to efficiently compress and decompress data.
pecl install lzf
echo "extension=lzf.so" > /etc/php.d/lzf.ini
Step#10 Install the PEAR packages.  In this example we manually install several PEAR modules first, both to verify that the PEAR installation is working and to ensure that these optional modules get installed, as this setup depends on their existence.  In particular, Net_Sieve and Horde_Memcache do not install by default.
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install HTTP_Request
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install Net_SMTP
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install Net_Sieve
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install Auth_SASL
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install Net_DNS2
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install horde/horde
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install horde/Horde_Memcache
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install horde/imp
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install horde/turba
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install horde/kronolith
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install horde/mnemo
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install horde/nag
/srv/www/vhosts/horde/pear/pear -c /srv/www/vhosts/horde/pear.conf install horde/ingo
Step#11 Initialize the configuration.
cp  /srv/www/vhosts/horde/config/conf.php.dist  /srv/www/vhosts/horde/config/conf.php
setfacl -m u:apache:rw /srv/www/vhosts/horde/config/conf.php
touch /srv/www/vhosts/horde/imp/config/conf.php
touch /srv/www/vhosts/horde/ingo/config/conf.php
touch /srv/www/vhosts/horde/turba/config/conf.php
touch /srv/www/vhosts/horde/kronolith/config/conf.php
touch /srv/www/vhosts/horde/nag/config/conf.php
touch /srv/www/vhosts/horde/mnemo/config/conf.php
setfacl -m u:apache:rw /srv/www/vhosts/horde/imp/config/conf.php
setfacl -m u:apache:rw /srv/www/vhosts/horde/ingo/config/conf.php
setfacl -m u:apache:rw /srv/www/vhosts/horde/turba/config/conf.php
setfacl -m u:apache:rw /srv/www/vhosts/horde/kronolith/config/conf.php
setfacl -m u:apache:rw /srv/www/vhosts/horde/nag/config/conf.php
setfacl -m u:apache:rw /srv/www/vhosts/horde/mnemo/config/conf.php
Step#12 Enable name based virtual hosting.
Edit the /etc/httpd/conf/httpd.conf file and uncomment the line reading "NameVirtualHost *:80".

Step#13  Create a virtual host entry for the Horde instance. If you have a server-name / domain-name you should substitute that for "horde.example.com". Otherwise if this instance is merely for testing/development adding horde.example.com to your workstation's /etc/hosts file should be sufficient to allow you to access the instance. The domain "example.com" will never be issued as an actual domain (see RFC2606) so it is safe to use for development deployments.  Depending on your site's policies you may want to configure custom logging for this virtual host.
(cat <<EOF
<virtualhost *:80>
    ServerAdmin webmaster@horde.example.com
    ServerName horde.example.com
    ServerAlias horde
    DocumentRoot /srv/www/vhosts/horde
    <directory /srv/www/vhosts/horde>
       Options Indexes Includes FollowSymLinks
       Order allow,deny
       Allow from all
    </directory>
   php_value include_path /srv/www/vhosts/horde/pear/php
   SetEnv PHP_PEAR_SYSCONF_DIR /srv/www/vhosts/horde
</virtualhost>
EOF
) > /etc/httpd/conf.d/x-vhost-horde.conf
Step#14 Start the web server (Apache) and Memcache daemon.
service httpd start
chkconfig httpd on
service memcached start
chkconfig memcached on
Step#15 You should now be able to hit the CentOS6 instance with your web-browser and automatically be logged in as the Horde administrator!  Go to the Administration / Configuration page via the left-hand menu and you should see a list of the installed Horde applications as well as the first level of Horde modules that provide services to those applications (such as "Horde_Alarm", "Horde_Activesync", etc...).  If you don't see those additional Horde modules listed then something went wrong with your PEAR installation; start over and carefully watch the output of the commands for errors or warnings.

Step#16 Generate new configurations for all applications; to perform this function click the "Update all configuration" button. This will fill in the various conf.php files we created in Step#11.

Step#17 Provision a PostgreSQL database for use by the Horde instance. Caution: If you are reusing a PostgreSQL instance from other applications do not perform the "service postgresql initdb" command.
yum install postgresql-server
service postgresql initdb
service postgresql start
sudo  -u postgres createuser --no-password --no-createdb --no-createrole --no-superuser horde4
sudo  -u postgres createdb -E utf-8 -O horde4 horde4
Once the database is provisioned you need to allow the Horde instance to connect to it.  For simplicity in this example we are connecting to the PostgreSQL instance on localhost, so we will simply change the configuration to trust local connections.  For production deployments at least a password should be configured for the connection.  To grant access edit the /var/lib/pgsql/data/pg_hba.conf file and change "ident" to "trust" on the line reading "host    all         all         127.0.0.1/32".  Then restart the PostgreSQL database so that it rereads this file: "service postgresql restart"

Step#18 Configure the database connectivity of the Horde instance. Now that Horde is up and running subsequent configuration is simple. Select Administration / Configuration from the left-hand menu. From the list of applications select "Horde" and then choose the "Database" tab.
  • For database type choose "PostgreSQL"
  • Check the box enabling persistent connections.
  • For "username" enter "horde4"
  • Change protocol to "TCP/IP"
  • For "hostspec" and "port" enter "127.0.0.1" and "5432".
  • For "database" enter "horde4".
  • Leave "charset" as "utf-8" and "splitread" as "Disabled"
  • Once the form is filled in click the "Generate Horde Configuration"

Step#19 Now click the "Update all DB schemas" button; this will initialize the database with the required tables.  Every time applications are updated this button will allow the database schema to be automatically updated.  The first time you initialize Horde you should repeat this operation until no more database schema errors or notices appear - typically this requires performing the operation twice.
Step#20 The last step in the base Horde configuration is to enable a caching system to accelerate performance.  For this example we are using the memcache service we enabled in Step#14.  Navigate to Administration / Configuration, select the Horde application, and then choose the Memcache Server tab.
  • Change the status to "Enabled"
  • For "hostspec" and "port" enter "127.0.0.1" and "11211".  These are the default Memcache configuration parameters.
  • Enable persistent connections by checking the "persistent" box
  • Change to the "Cache System" tab.
  • For the cache system driver select "Use a Memcache server".
  • Click the "Generate Horde Configuration" button.
Your Horde instance is now configured with database connectivity and an active caching system.  Subsequent articles will cover the procedure for configuring specific applications and enabling OpenGroupware Coils integration.

2011-08-30

Introduction to using coils.workflow.9100

Imagine you have a big and very expensive industrial device that performs a critical diagnostic function.  This device expends a great deal of effort [and energy] to produce a text file on a Windows 2000 SP2 workstation that is tethered to the device through a complex proprietary interface; on that workstation is a proprietary software application that communicates with the device via that complex proprietary interface.  You can't modify the software application, you can't even join the workstation to the corporate domain, and the last thing you want to do is anything that might break this critical, complex, proprietary, expensive, and hopelessly undocumented appliance.  But a text file on a workstation is just data; it isn't information.  You need to move this file onto the network and you want to relate it to other data - thus producing information.

What can a Windows 2000 SP2 workstation that isn't a domain member do?  It can print; or at least it believes that it can print.  Jet-Direct [aka "socket"] printing has been around since mankind started to replace RS-232 multiplexors and access-servers with packet switched networks.  Of course, with socket printing, the client never really knows what happens on the other end - it just dumps data down the hole and assumes everything proceeds from there.
Enter the new coils.workflow.9100 component that provides a socket listener for accepting TCP data streams into work-flow messages.  Just as the client is oblivious to what happens when it pours a 'print job' into a socket the coils.workflow.9100 is just as unconcerned as to what that client expects to happen.  The upside is that once the data is successfully transitioned into a work-flow engine - anything is possible.

The first step to using the coils.workflow.9100 component to receive your data stream is to configure the interface it listens on.  For security reasons, unless otherwise configured, it listens only on the IPv4 loopback interface.  To change this we must change the Coils9100ListenAddress default - setting this to an IP address makes the component listen on just that address, while setting it to an empty string makes the component listen on all interfaces:

$ coils-server-config --directive=Coils9100ListenAddress --value=''
Setting the component to listen on all addresses/interfaces.  Don't forget to also adjust any related firewall and/or network policy rules that might block the connection or traffic.

Now send a termination signal (signal 15) to the coils.workflow.9100 component and when it restarts it will be listening on all interfaces.

Now on that Windows 2000 SP2 workstation we use the Microsoft "Add Printer" wizard to add a new "Local" printer connected to a new port of type "Standard TCP/IP Port".  The IP address of the "TCP/IP Printer Port" is "coils.example.com" (the host name of your Coils server); the "Port Name" can be any string value you prefer.  Microsoft Windows will be unable to identify our new "device" (possibly because it doesn't exist?) so when prompted for "Additional Port Information" select the standard type "Generic Network Card".  Now that the port is created a printer driver must be assigned.  Select manufacturer "Generic" and printer model "Generic / Text Only".
Next the printer must be named.  Since you are creating the printer to connect to an OIE workflow named "CiscoLoadTester" you will choose to name the printer "OIE:CiscoLoadTester".  Nice and obvious.  Now just click through the remaining dialog panes and the printer is set up.  You can now print jobs to the workflow engine!

But, wait, there's a problem.  You have many routes defined - how will OIE know to which route it should deliver the data it receives on the raw socket?  It is just a socket connection after all.  To solve this problem there are two options.

The first option, if the device has a static IP and will only be submitting data to this route, is to set the {http://www.opengroupware.us/oie}clientNetworkAddress property on the route you want to receive the data.  If that property is set, all connections from the matching network [IP] address will be routed to the route bearing that property.  That is simple, but then only one [actually two] clients can submit data to the route, that client can only ever use that one route, and if the client's IP address changes someone has to remember to update the property value.

The second option, and probably the better one in this case, is to define a stream preamble.  A stream preamble is a string of characters at the beginning of the data stream that the coils.workflow.9100 component will detect and use to route the contents of the stream.  For many applications it isn't possible, or at least not easy, to inject a preamble into the stream.  But since you are using a Microsoft Windows 2000 [or XP, Vista, 7] workstation with a Generic Text printer, adding a preamble is simple.  Select the printer in the system's "Printers" dialog, right-click, and select "Properties".  Then under the "Printer Commands" tab, in the "Begin Print Job" field, enter the text "::{Workflow:CiscoLoadTester}::".  This string will be sent at the beginning of every job queued to the printer.  The coils.workflow.9100 component will detect the string "::{Workflow:routeName}::" and attempt to use the specified route.  This preamble will be removed from the stream and the OIE work-flow engine will only receive the remaining contents.
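The preamble convention can be illustrated with a small sketch: one function prefixes a data stream the way the "Begin Print Job" field does, and another detects and strips the preamble the way the component is described as doing.  This mirrors the documented "::{Workflow:routeName}::" format but is not the component's actual code:

```python
import re

# Matches a preamble of the documented form at the start of a stream.
PREAMBLE_PATTERN = re.compile(r'^::\{Workflow:([A-Za-z0-9_]+)\}::')

def add_preamble(route_name, payload):
    """Prefix a data stream with a route-selecting preamble."""
    return '::{{Workflow:{0}}}::'.format(route_name) + payload

def split_preamble(stream):
    """Return (route_name, remaining_content); route_name is None when
    no preamble is present.  The preamble itself is removed, so the
    work-flow only ever sees the remaining payload."""
    match = PREAMBLE_PATTERN.match(stream)
    if match is None:
        return None, stream
    return match.group(1), stream[match.end():]

route, content = split_preamble(add_preamble('CiscoLoadTester', 'report data'))
```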

Now when the user finishes generating a report from that enormous and expensive industrial device that provides only limited network integration - they just print the text file to the "OIE:CiscoLoadTester" printer and the data magically propagates through the rest of the system and to the appropriate applications.

When the component sees your stream preamble it will make a log entry like:
INFO:coils.workflow.9100:Stream preamble specified route named "CiscoLoadTester".
INFO:coils.workflow.9100:Paired stream to route "CiscoLoadTester" via preamble.
Log sample from coils.log when a route is matched via the preamble.
If you don't see that then something is wrong with your setup [possibly, after entering the string in the "Begin Print Job" field, you didn't hit the dialog's "Apply" button?].

When there is a connection which the component can't match to a work-flow route you'll see log entries like:
DEBUG:coils.workflow.9100:Incoming connection from ('192.168.21.178', 1134)
DEBUG:coils.workflow.9100:Maximum connection transfer is 4294967296b
INFO:coils.workflow.9100:Processing stream from "192.168.21.178"
INFO:coils.workflow.9100:No matching route found for inbound 9100 connection
DEBUG:coils.workflow.9100:Closing connection from ('192.168.21.178', 1134)
Log entries for an unmatched in-bound stream.
In order to be as helpful as possible the coils.workflow.9100 component will still store the contents of the stream in a server-side attachment, in case that data was valuable [it won't be lost], and send an e-mail message to the address you configured via the AdministrativeEMailAddress default.  This allows you to catch someone or something attempting to stream data to OIE when OIE can't determine what it should be doing with that data.

2011-08-28

OpenGroupware Coils & WMOGAG 0.1.44 Released

OpenGroupware Coils 0.1.44 has been uploaded to SourceForge and PyPI.  A corresponding version of WMOGAG (Whitemice OpenGroupware Administrators Guide) that documents the new features and capabilities in detail is also now available.  Most notable in this release is the completed implementation of the coils.workflow.9100 component, which provides a socket listener for accepting data into the work-flow engine from raw data streams.

Bug fixes
  • LOGIC: object::get-notes now complies with documented behavior when asked for notes on an entity that doesn't support notes
  • OIE: findRegularExpressionAction's singleton parameter accepts YES / NO values in the same manner as the toggle values of other actions (it previously accepted only TRUE / FALSE)
  • NETWORK: Fixed the XML-RPC proxy transcoder to transcode input as well as output.
  • OIE: ColumnarXLSReaderFormat handling of defined static columns corrected.
Enhancements
  • LOGIC: Rewinding of the stream when creating a workflow message from a stream or file can be disabled.
  • LOGIC: Implemented route::search; routes can now be searched for using criteria.
  • OIE: New action "chompText" implements common text transforms such as correcting pagination and removing margins.
  • OIE: findRegularExpressionAction now has a "trimValue" parameter to remove whitespace from the value.
  • zOGI: Improved support for managing OIE / workflow entities
  • CORE: The _MESSAGES sub-key in the Omphalos representation of a Process now includes the messages contained by the process
  • OIE: Data can now be submitted to OIE via the new coils.workflow.9100 component which provides a 'raw' socket listener.  Documentation for this component is provided in version 0.1.44 of WMOGAG, including use of stream preambles to match a data stream to a workflow.

2011-08-15

ANNOUNCING: snurtle

snurtle is a command line interface for managing an OpenGroupware Coils server.  Management is performed using the JSON-RPC presentation of the zOGI API. Currently snurtle requires the development edition of Coils; support will be official when OpenGroupware Coils v0.1.44 is released.
Currently implemented actions include:
  • modifying an object's properties and ACLs
  • assigning and un-assigning an entity to a collection
  • listing entities
    • Contacts
    • Enterprises
    • Projects
    • Collections
    • Processes
    • Routes
  • querying entity type by id
For questions concerning snurtle use the Coils-Project e-mail list. A quick-start guide is available at the project site (on SourceForge) and command documentation is included in WMOGAG.

2011-05-27

Using the AttachFS Protocol Bundle

The AttachFS protocol bundle provides a very simple means of accessing attachment and document content as well as creating attachments. The AttachFS protocol bundle is available at the server root “/attachfs”. Access to content via AttachFS requires HTTP authentication to be performed – just as when accessing the server's WebDAV presentation.

Viewing and Downloading Content
$ curl -o file.ods -u awilliam http://127.0.0.1:8080/attachfs/view/iwuV3AmSvh-10100-vdqs37MIYu-1305887683970458@227fd7d5-0c5e-4074-b2f0-7470a8dadddd
$ curl -o file.ods -v -u awilliam http://127.0.0.1:8080/attachfs/view/11124685
The first curl operation requests an attachment from AttachFS; the second curl operation requests the document with objectId#11124685.
Both attachments and documents can be viewed by requesting the id from the “/attachfs/view/” location or downloaded from the “/attachfs/download/” location. The server will determine if the requested content is from an attachment or a document entity by examination of the requested name. For either attachments or documents transformations can be performed using an OSSF chain.
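The name examination described above - numeric names resolving as document objectIds, other names resolving as attachments - can be sketched as a small helper.  This is a hypothetical illustration of the routing rule, not the server's code:

```python
def classify_attachfs_name(name):
    """Guess how AttachFS would route a requested name: an all-digit
    name is treated as a document objectId, anything else as an
    attachment identifier (per the behavior described above)."""
    return 'document' if name.isdigit() else 'attachment'
```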

Creating An Attachment

$ curl -u awilliam -T OGo18215321.ods \
    -H 'Content-Type: application/vnd.oasis.opendocument.spreadsheet' \
    http://127.0.0.1:8080/attachfs/10100/OGo18215321.ods
This curl command will create an attachment with the webdav_uid of "OGo18215321.ods" related to the entity 10100 and with the specified MIME type.
Performing an HTTP PUT operation to the “/attachfs/{name}” location will create a new attachment with the specified name.  The attachment id will be returned in the “Etag” header of the HTTP 201 response to the PUT request.  This attachment will not be related to any entity.

An attachment can also be created by performing the HTTP PUT to the path “/attachfs/{objectId}/{name}” which will create an attachment related to the entity whose objectId is specified in the path. The name of that attachment will be preserved.

2011-03-29

New Edition Of WMOGAG-Coils

A new version of the Whitemice Consulting OpenGroupware Administrator's Guide for OpenGroupware Coils has been uploaded to SourceForge. This provides documentation for several of the latest OIE actions that will be available in 0.1.40rc7 as well as the first version of the installation / provisioning chapter.