WO2006060276A9 - System for transactionally deploying content across multiple machines - Google Patents

System for transactionally deploying content across multiple machines

Info

Publication number
WO2006060276A9
Authority
WO
WIPO (PCT)
Prior art keywords
deployment
content
deployments
files
target
Prior art date
Application number
PCT/US2005/042732
Other languages
French (fr)
Other versions
WO2006060276A3 (en)
WO2006060276A2 (en)
Inventor
Vijayakumar Kothandaraman
William G Cuan
Todd Scallan
Original Assignee
Interwoven Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Interwoven Inc
Publication of WO2006060276A2
Publication of WO2006060276A9
Publication of WO2006060276A3

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/06: Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L67/34: Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
    • H04L67/50: Network services
    • H04L67/75: Indicating network or usage conditions on the user display

Definitions

  • the invention relates generally to multi-computer transfer of data. More particularly, the invention relates to transactional deployment of data across multiple machines.
  • FTP: file transfer protocol
  • a distributed application architecture that includes a user interface for use by an application developer to construct executable application load modules for each system on which an application will reside. Transfer of load modules occurs by way of a conventional FTP (file transfer protocol) application.
  • While FTP is an ideal point-to-point utility, the tool must be configured or customized each time a new target destination or content origination point is identified. This customization can be labor-intensive and, in the long run, it drives up the total cost of ownership of any web-based application relying on FTP for distribution because of the need to manage and maintain each customization individually.
  • RSYNC: a utility providing fast, incremental file transfer
  • While RSYNC is a more sophisticated tool than standard FTP, it lacks the built-in encryption and authorization needed to meet security requirements; it does not provide an easy means of integrating the distribution process with other applications; and it is difficult to scale.
  • a target node's environment may be adjusted before an asset is deployed to that target node.
  • a target deployment adapter, associated with the asset may be selected and deployed with the asset in order to allow the asset to operate in the target node environment.
  • the invention provides a system for transactionally deploying content across multiple machines in a network environment that automates and synchronizes the secure and reliable distribution of code, content and configurations to multiple network locations, thereby allowing controlled provisioning and synchronization of code and content updates to live applications.
  • the invented system employs an open, distributed architecture that includes at least one receiver — a secure listener that processes incoming deployments from one or more senders, and at least one base server — a sender that may also act as a receiver.
  • the invention is able to deploy digital assets managed in any repository or file system to any type of network touch point — file servers, application servers, databases, and edge devices.
  • Use of a base server as a receiver facilitates multi-tiered deployments.
  • the invention additionally includes an administration interface to be installed on a network-accessible system to provide administrative and reporting services and management of the deployment process.
  • users are enabled to launch, simulate, schedule and monitor activities for any network location at any time.
  • a command line interface and a web-services API (application programming interface) are also provided.
  • the invention also provides for management of user rights with fine granularity.
  • the invention supports ECD (enterprise content deployment) with fan-out, multi-tiered and routed deployment topologies capable of including hundreds of servers.
  • the invented system also provides a variety of content manipulation features and is optimized to deliver only the delta changes between a source and each target.
  • the invented system is scalable, allowing server farms to be added incrementally as the network infrastructure changes and develops. Each deployment is fully transactional, permitting rollback of the system to its "last known good" state in the case of failure.
  • Figure 1 provides an architecture diagram of a system for transactionally deploying content across multiple machines according to the invention
  • Figure 2 provides a flow diagram of an exemplary network topology from the system of Figure 1 according to the invention
  • Figure 3 shows a stack diagram of an open content deployment protocol incorporated from the system of Figure 1 according to the invention
  • Figure 4 shows a stack diagram of a service-oriented architecture from the system of Figure 1 according to the invention
  • Figure 5 provides a screenshot of a login screen to the system of Figure 1 according to the invention
  • Figure 6 provides a screen shot of an administrative user interface (UI) to the system of Figure 1 according to the invention
  • Figure 7 provides a screen shot of a user interface for managing deployments from the administrative UI of Figure 6 according to the invention.
  • Figure 8 provides a screen shot of a user interface for scheduling deployments from the administrative UI of Figure 6 according to the invention
  • Figure 9 provides a screen shot of a user interface for managing user rights and privileges from the administrative UI of Figure 6 according to the invention
  • Figure 10 provides a screen shot of a user interface for viewing server status from the administrative UI of Figure 6 according to the invention
  • Figure 11 provides a screen shot of a user interface for generating and managing reports from the administrative UI of Figure 6 according to the invention
  • Figure 12 provides a screen shot of a deployment leg report and a manifest report accessible via the user interface of Figure 11 according to the invention
  • Figure 13 provides a screen shot of a screen for configuring a custom report from the user interface of Figure 11 according to the invention
  • Figure 14 provides a screen shot of a user interface for managing deployment configurations from the administrative UI of Figure 6 according to the invention.
  • Figure 15 provides a screen shot of a user interface for viewing deployment configurations from the administrative UI of Figure 6 according to the invention
  • Figure 16 provides a screen shot of a deployment configuration composer from the administrative UI of Figure 6 according to the invention
  • Figure 17 illustrates a parallel deployment from the system of Figure 1 according to the invention
  • Figure 18 illustrates a multi-tiered deployment from the system of Figure 1 according to the invention
  • Figure 19 illustrates a routed deployment from the system of Figure 1 according to the invention
  • Figure 20 illustrates rollback of a parallel deployment after failure according to the invention
  • Figure 21 provides a screen shot of a log view from the administrative UI of Figure 6 according to the invention.
  • Figure 22 provides a diagram illustrating security measures of the system of Figure 1 according to the invention.
  • Figure 23 provides a screen shot of a user interface for a module for synchronized deployment of database content from the system of Figure 1 according to the invention
  • Figure 24 provides a diagram of an architecture for synchronized deployment of database content from the system of Figure 1 according to the invention.
  • Figure 25 provides screen shots of the user interface for an intelligent delivery module from the system of Figure 1
  • Figure 26 provides a schematic of a control hub for automating provisioning of web application updates according to the invention.
  • the invention provides a system for the cross-platform, transactional transfer of code, content and configurations to multiple machines.
  • the system architecture 100 supports enterprise distribution and automates the deployment process, while providing a high degree of flexibility and administrative control.
  • the system easily integrates with any code or content management system, thus making content distribution a natural extension of established business processes.
  • An open architecture enables the invention to distribute assets managed in any repository or file system to all network touch points found in today's IT environments, including file servers, application servers, databases and edge devices.
  • the system includes one or more senders 101 and one or more receivers 102.
  • a base server fulfills the role of sender.
  • the base server is configured both to send content and to receive it.
  • the receiver is a secure listener configured to process incoming distribution jobs.
  • An administrative console 103 provides administrative and reporting services and deployment management. At the administrative console, configuration files are created, edited, managed and distributed throughout the enterprise as needed.
  • content 106 refers to any digital asset of an enterprise, including, but not limited to:
  • the distribution architecture 100 retrieves content and facilitates any necessary transformations as it is distributed along the way.
  • the administration console 103 is used to administer distribution modules 109, base servers and/or receivers residing across the network.
  • the administration console also incorporates a reporting and auditing module 104.
  • Security features 107 including encryption of deployed content and secure connections safeguard an enterprise's digital assets against unauthorized access. Deployment processes are fully transactional, permitting rollback 108 of the system and the content to its last known good state in case a deployment fails. More will be said about each of the above system elements in the paragraphs below.
  • the system facilitates mission-critical processes within IT operations throughout the enterprise including:
  • Web change management: controlled provisioning of code, content and configuration updates to web applications
  • the content deployment system enables IT organizations to:
  • Figure 2 provides a flow diagram of an exemplary network topology from the invented system. Illustrated is a case wherein code and content 201 are being developed in San Francisco. From a hub system in San Francisco 202, the content is distributed to a hub at each of three geographically dispersed sites 203: New York, London and Tokyo. From there, the system replicates updates to regional targets 204. If the assets to be distributed reside in a repository, such as a software configuration or content management system, the system can access the assets directly through the file system or by automatically invoking an appropriate export facility. The distribution environment may be as simple or sophisticated as required by the implementer. Systems may include a mix of, for example, WINDOWS and UNIX platforms, or other computing platforms, such as APPLE or VMS.
  • each system that participates in the distribution environment runs a receiver (for example, the regional targets 204) or a base server (the hubs 201, 202, for example).
  • the system delivers value both to the administrator who sets up and manages the deployment environment and the user who submits deployment jobs.
  • the administrator uses the administrative console, by means of a browser-based Administrative UI (user interface) 400, described in greater detail below, to assign users and authorizations to the system.
  • an administrator also configures base servers, receivers and deployment rules via XML (extensible markup language) files. A user may then log in and initiate or schedule deployment jobs.
  • the invention employs a connection-oriented protocol that defines how senders and receivers transfer content and communicate status information.
  • the underlying base transport protocol is TCP/IP.
  • the Content deployment protocol sits above the SSL (Secure Sockets Layer) protocol.
  • the open content deployment protocol consists of a series of handshakes and operation directives that are exchanged between the sender and receiver. Once a connect session is established, the sender pushes over the configuration parameters for the deployment. The receiver, with this session information in hand, executes the deployment accordingly.
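The handshake-and-directive exchange described above might be sketched as follows; the message names (HELLO, DEPLOY, DONE) and the JSON encoding of the configuration parameters are illustrative assumptions, since the actual wire format of the open content deployment protocol is not reproduced here.

```python
# Illustrative sketch: sender establishes a session, pushes the
# deployment configuration, and the receiver reports completion.
import json
import socket
import threading

def sender_session(sock, config):
    """Sender side: handshake, then push the deployment parameters."""
    sock.sendall(b"HELLO\n")
    assert sock.recv(16).strip() == b"HELLO-ACK"          # session established
    sock.sendall(b"DEPLOY " + json.dumps(config).encode() + b"\n")
    return sock.recv(16).strip() == b"DONE"               # receiver's status

def receiver_session(sock):
    """Receiver side: accept the session, read the config, execute."""
    assert sock.recv(16).strip() == b"HELLO"
    sock.sendall(b"HELLO-ACK\n")
    line = sock.recv(4096)
    config = json.loads(line.split(b" ", 1)[1])
    # ... the receiver would execute the deployment per `config` here ...
    sock.sendall(b"DONE\n")
    return config

a, b = socket.socketpair()
cfg = {"type": "filelist", "target": "/var/www"}
t = threading.Thread(target=receiver_session, args=(b,))
t.start()
ok = sender_session(a, cfg)
t.join()
print("deployment status:", ok)
```

The session-oriented shape (connect, push parameters, execute, report) mirrors the patent's description; everything below the message level is a placeholder.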
  • the type of deployment determines the behavior of the receiver and which options and functionality to activate and execute. The three types of deployment are described below.
  • Content management: In a content management deployment, content from the content management server is pushed over to the receiver. The receiver operates in passive mode;
  • File list: In a file list-based deployment, files and/or directories are pushed over to the receiver. The receiver operates in passive mode; and
  • Directory comparison: In a directory comparison deployment, the source-side directory information is sent over to the receiver. The receiver compares the source-side directory information against the target-side directory information to determine what content needs to be transferred.
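The directory-comparison step can be illustrated with a small sketch that diffs source and target listings keyed by a content fingerprint, so that only changed and obsolete items move; the listing structure ({path: fingerprint}) is an assumption for illustration, not the protocol's actual representation.

```python
# Compare source-side and target-side directory information to decide
# what to transfer and what to delete (the "delta" of a deployment).
import hashlib

def fingerprint(data: bytes) -> str:
    """A content fingerprint; any stable digest would serve."""
    return hashlib.sha1(data).hexdigest()

def compute_delta(source_listing, target_listing):
    """Return (to_transfer, to_delete) given {path: fingerprint} maps."""
    to_transfer = [p for p, fp in source_listing.items()
                   if target_listing.get(p) != fp]       # new or changed
    to_delete = [p for p in target_listing
                 if p not in source_listing]             # no longer on source
    return sorted(to_transfer), sorted(to_delete)

source = {"index.html": fingerprint(b"v2"), "app.js": fingerprint(b"v1")}
target = {"index.html": fingerprint(b"v1"), "old.css": fingerprint(b"v1")}
transfer, delete = compute_delta(source, target)
print(transfer, delete)
```

This is also the mechanism that lets the system "deliver only the delta changes between a source and each target," as stated elsewhere in the description.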
  • the invention provides a transactional deployment option that maintains the previous state of the destination directory, in case the currently-initiated deployment, for any reason, fails.
  • the deployed files are staged in the destination directory while a shadow copy of the original content is created for rollback upon failure. This shadow copy is created per content item (file/directory) as the deployment progresses. Thus, if a rollback is required, only the files that have been deployed so far are reverted. The rest of the content remains untouched.
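The per-item shadow-copy mechanism can be sketched as follows; this is a simplified illustration (flat file names, in-memory byte payloads, a simulated failure), not the product's implementation, but it shows why only the files deployed so far are reverted on rollback.

```python
# Transactional deployment sketch: back up each destination item
# before overwriting it, so a failure reverts exactly the items
# touched so far and leaves the rest of the content untouched.
import os
import shutil
import tempfile

def deploy_transactionally(updates, dest_dir):
    """updates: {filename: new_bytes}. Returns True on success,
    False after rolling back a failed deployment."""
    shadow = {}                                   # filename -> backup path or None
    backup_dir = tempfile.mkdtemp(prefix="shadow-")
    try:
        for name, data in updates.items():
            path = os.path.join(dest_dir, name)
            if os.path.exists(path):              # shadow-copy the original
                shadow[name] = os.path.join(backup_dir, name)
                shutil.copy2(path, shadow[name])
            else:
                shadow[name] = None               # item is newly created
            with open(path, "wb") as f:
                f.write(data)
            if data == b"FAIL":                   # simulated mid-deployment failure
                raise RuntimeError("deployment failed")
    except Exception:
        for name, backup in shadow.items():       # revert only deployed items
            path = os.path.join(dest_dir, name)
            if backup is None:
                os.remove(path)                   # new item: remove it
            else:
                shutil.copy2(backup, path)        # restore "last known good"
        return False
    finally:
        shutil.rmtree(backup_dir)
    return True

dest = tempfile.mkdtemp()
with open(os.path.join(dest, "a.txt"), "wb") as f:
    f.write(b"original")
ok = deploy_transactionally({"a.txt": b"updated", "b.txt": b"FAIL"}, dest)
with open(os.path.join(dest, "a.txt"), "rb") as f:
    print(ok, f.read())
```

A real implementation would also stage directories, permissions and deletions; the rollback-in-reverse idea is the same.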
  • the deployments described earlier are considered “push” deployments.
  • the invention also allows reverse deployments, in which content is “pulled” from a remote directory.
  • the invention's authentication options ensure that communication occurs with a known machine in a known manner and that data is received directly from the known machine without interception by a third party.
  • the types of authentication are described below:
  • Authentication by IP address.
  • the invention can be configured to work with a firewall to ensure that the receiver is communicating with a known machine in a known manner.
  • the receiver can be configured to listen on a specific port for connection attempts by the firewall's specific IP address.
  • the receiver can be further configured to receive content only from a known, trusted source.
  • the invention can be configured to work with SSL certificates to ensure that data is received directly from a known machine without any interception by a third party.
  • An affiliated Certificate Authority (CA) generates public key/private key pairs for both sender and receiver.
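The two authentication options above, trust by IP address and SSL certificates issued by the affiliated CA, might be sketched with Python's ssl module as follows; the allowlisted address is a placeholder, no certificate files are loaded, and no connection is made.

```python
# Sketch of receiver-side authentication: an IP allowlist check plus a
# TLS context that demands a client certificate from the sender.
import ssl

TRUSTED_SENDERS = {"192.0.2.10"}            # placeholder allowlist / firewall rule

def connection_permitted(peer_ip: str) -> bool:
    """Accept connections only from known, trusted source addresses."""
    return peer_ip in TRUSTED_SENDERS

def receiver_tls_context(ca_file=None):
    """TLS context requiring the sender to present a certificate
    signed by the affiliated Certificate Authority."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED     # sender must authenticate
    if ca_file:
        ctx.load_verify_locations(ca_file)  # trust only the affiliated CA
    return ctx

ctx = receiver_tls_context()
print(connection_permitted("192.0.2.10"), ctx.verify_mode == ssl.CERT_REQUIRED)
```

In a deployment, the same context would wrap the receiver's listening socket so the content deployment protocol rides above SSL, as in the stack of Figure 3.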
  • a service-oriented architecture is designed to enable a loose coupling between interacting software agents.
  • Figure 4 provides a stack diagram of a service-oriented architecture according to the invention.
  • the invention includes a SOAP- (simple object access protocol) based interface that provides programmatic access to the various functions and capabilities of the system.
  • SOAP- simple object access protocol
  • a language-neutral, firewall-friendly API exposes web services, such as starting a deployment or retrieving the status of a deployment, using standard WSDL (web services description language).
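A call to such a web service might look like the following sketch, which only constructs the SOAP request envelope; the operation name startDeployment and the service namespace are hypothetical, since the actual WSDL is not reproduced in this description.

```python
# Build a SOAP 1.1 request envelope for a hypothetical
# startDeployment operation of the web-services API.
from xml.etree import ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "urn:example:content-deployment"    # hypothetical service namespace

def build_start_deployment_request(group: str, deployment: str) -> bytes:
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}startDeployment")
    ET.SubElement(op, f"{{{SVC_NS}}}group").text = group
    ET.SubElement(op, f"{{{SVC_NS}}}deployment").text = deployment
    return ET.tostring(env)

request = build_start_deployment_request("/", "test")
print(request.decode())
```

Because SOAP rides over HTTP, such a request is firewall-friendly and language-neutral, which is the point of exposing deployment operations through WSDL.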
  • the invention provides a programmatic infrastructure to broaden applicability of content distribution and web change provisioning within diverse computing environments.
  • Elements of such architecture include:
  • Payload adapters: A Base Server can be integrated with an arbitrary source or metadata repository via a payload adapter, which is executed in process at the start of a deployment job. A parameter string or XML-based query is passed to the adapter from the deployment configuration file, described in more detail below. The adapter prepares a payload of files, which is returned to the Base Server, compared with the targets, and deployed or deleted as appropriate.
  • Delivery adapters: Deployments may include delivery adapters, which extend the invention to any target application server, protocol or device. After files are deployed to a target Base Server, a delivery adapter is invoked in process with a manifest of deployed files. The adapter then processes the files; for example, by pushing new content into a set of cache servers.
  • Routing adapters: Routed deployments (described infra) rely on an adapter for computing multi-tiered delivery routes for deployed files.
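The three adapter types can be modeled as a small plug-in interface; the class and method names below are assumptions about the shape of such an interface, not the product's actual API.

```python
# Plug-in sketch for the payload, delivery and routing adapter types.
from abc import ABC, abstractmethod

class PayloadAdapter(ABC):
    @abstractmethod
    def prepare_payload(self, query: str) -> list:
        """Return the list of files to deploy, given a parameter
        string or XML-based query from the configuration file."""

class DeliveryAdapter(ABC):
    @abstractmethod
    def deliver(self, manifest: list) -> None:
        """Post-process deployed files, e.g. push them into caches."""

class RoutingAdapter(ABC):
    @abstractmethod
    def route(self, files: list, tiers: list) -> dict:
        """Map each delivery tier to the files it should receive."""

class FanOutRouter(RoutingAdapter):
    """Trivial routing adapter: every tier receives every file."""
    def route(self, files, tiers):
        return {tier: list(files) for tier in tiers}

routes = FanOutRouter().route(["index.html"], ["new-york", "london", "tokyo"])
print(routes)
```

A routed deployment would substitute a smarter RoutingAdapter that computes multi-tiered routes; the interface stays the same.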
  • the invention supports enterprises with multi-tiered deployment topologies consisting of tens or hundreds of servers inside and outside firewalls. Deployments are optimized to distribute only the incremental changes between a source and each target. Servers can be added as initiatives grow, which affords a solution that is readily adapted to a continually changing IT infrastructure. Moreover, cross-version compatibility and the ability to run multiple instances of the invention on a host provide a capability of phased upgrades in production environments.
  • Figure 5 provides a screenshot of a login screen 500 to the system of Figure 1.
  • a user is asked to provide a user name 501 and password 502, to select a server from a menu 503, and to specify the user's role 504, for example 'user' or 'administrator.'
  • the preceding description is meant only to be illustrative. Other authentication processes are entirely consistent with the spirit and scope of the invention.
  • a browser-based UI 600 grants ready access to all major system functions and processes, thus streamlining administration and execution of the distribution process.
  • a command line interface and a web-services API (application programming interface) are also provided.
  • Administrators can take advantage of the browser-based Administrative UI to set up the environment and monitor activities anywhere at any time. Users also benefit from the Admin UI, which makes launching, simulating and scheduling distribution jobs quick and easy.
  • the Admin UI lets administrators and users work from anywhere across the network. A person logging into the system is authenticated using the username and password for the underlying operating system or user directory.
  • the Administrative UI includes a navigation tree 601 that grants access to a number of functional areas.
  • these functional areas may include, as shown:
  • Servers: view and manage Base Servers and Receivers;
  • Reports: create and run deployment report queries; view or download reports;
  • User Access: assign access rights to base servers and receivers; restrict users' ability to initiate deployments;
  • Database auto-synchronization: configure database auto-synchronization for content from content management systems
  • the main work area of the Administrative UI displays details and functions related to the functional area selected in the navigation tree. As shown in Figure 6, the 'deployment' functional area 602 is selected. Thus, the main work area of the UI provides details and functions 604 related to 'deployments.' Arrows 603 allow the user to expand or contract each functional branch of the navigation tree 601 with a mouse-click.
  • ONLINE DEPLOYMENT MANAGEMENT: Users can run or simulate deployments directly through the Admin UI.
  • In running a deployment, the user initiates a job that is specified based on the particular deployment configuration selected. The process of creating a deployment configuration is described in greater detail below. Simulation is similar to running a deployment, except that no files are transferred, which allows a user to verify the behavior of a deployment configuration quickly without moving potentially many megabytes of data.
  • running a deployment involves expanding 'Deployments' in the navigation tree 601 and selecting 'Start Deployment.' Starting a deployment includes the following steps:
  • deployments can be organized into groups. The user selects a deployment group from the list; for example, the root level group (/);
  • Deployment: The user selects a deployment configuration from a list; for example, 'test.' The deployment configuration is an XML file that specifies deployment rules, such as the source area, the target and filters. Additional parameters may be specified:
    o Logging Level: either Normal or Verbose.
    o Deployment Instance: a unique name for a deployment job.
    o Parameters: key-value pairs to be used in a deployment that has been configured for parameter substitution.
  • After clicking the 'Start Deployment' button, the UI indicates that the deployment has started and provides details; for example, job ID and start time.
  • By selecting 'View Deployments' in the navigation tree, the user is presented with an interface 700 that allows monitoring of the status of the deployment that is currently executing.
  • Selected server 703: the value previously selected, e.g. 'localhost.'
  • View 704: indicates whether to look at the server as sending or receiving.
  • a base server can be both a sender and receiver, such as a hub node in a multi-tiered deployment or when performing a loop-back deployment.
  • Check boxes 705: these allow the user to filter which jobs to view, for example 'active,' 'completed' and 'scheduled,' including how many days ahead to look.
  • An 'Update' button 506 refreshes the display after making a change.
  • Deployments list 710: displays deployments for the selected server. The list is filtered according to the view and check boxes described above. Clicking the column headings changes the sort order;
  • Details list 702: clicking a Name (ID) in the Deployments list updates the details list with details about each deployment leg. For example, a parallel deployment to New York, London and Tokyo would have three legs.
  • The command line tool mentioned above may be used instead of the Administrative UI to initiate deployments.
  • JOB SCHEDULER: A built-in scheduler 800 allows users to schedule jobs once or at recurring intervals. Jobs may be scheduled, deactivated and reactivated from the Administrative UI using the job scheduler.
  • the user expands 'Schedules' in the navigation tree 601 and selects 'New Schedule'.
  • the work area of the UI shows the 'Scheduler' details 801, as in the 'Deployments' interface. Scheduling includes the following steps:
  • Start Date: the user provides a start date by choosing a month, day and year, or by clicking the 'Calendar' button to pop up a calendar 803 and selecting a date.
  • Specifying Deployment Frequency: if 'once' is selected, the deployment runs at the date and time specified. Alternatively, a frequency may be selected, such as daily. Depending upon the frequency selected, it may be necessary to provide additional scheduling details.
  • a 'Deployment Schedules' window (not shown) is accessible via 'View Schedules' in the navigation tree 601. Having functional capabilities analogous to the 'View Deployments' window, this feature allows the user to edit details, delete jobs, hold or activate a pending job, and refresh the view based on the selected deployment and group.
  • the command line interface may also be used to schedule deployments, deactivate scheduled jobs, delete jobs and retrieve schedule details.
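The once-versus-recurring scheduling choice can be sketched as a next-run computation; the frequency names and the shape of the function are illustrative assumptions, since the scheduler's internals are not specified at this level of detail.

```python
# Compute a scheduled job's next run time from its frequency setting.
from datetime import datetime, timedelta

def next_run(last_run: datetime, frequency: str):
    """Return the next run time, or None for a one-shot ('once') job."""
    if frequency == "once":
        return None                               # job does not recur
    deltas = {"daily": timedelta(days=1), "weekly": timedelta(weeks=1)}
    return last_run + deltas[frequency]

run = next_run(datetime(2005, 11, 28, 2, 0), "daily")
print(run)
```

Holding a pending job would simply suppress this computation until the job is reactivated.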
  • the invention includes an SNMP (simple network management protocol) agent to enable monitoring of the system via standard network management tools.
  • Administrators obtain status and alerts that are readily correlated to deployment reports and logs. Such alerts include, for example, failure conditions, such as abnormal termination of a deployment process, failure of a distribution job, and 'hung' jobs that are idle for extended periods of time.
  • target references for parallel deployments can be consolidated so that a single change is quickly and accurately applied to many deployments.
  • Selecting 'User Access' in the navigation tree allows the administrator to authorize a user to access Base Servers or Receivers. For example, the administrator first selects a server name from the pull-down menu 904, and enters or selects the Username of a user to whom access rights are to be assigned 905. Some embodiments include a 'Lookup User' feature (not shown) that allows the administrator to view the available roles for a particular user. The administrator can then select a role for the user and add it. As a result, the user is permitted access to the selected server with the assigned role;
  • Deployments: The administrator selects 'Deployments' from the navigation tree to authorize a user to initiate specific deployments 903 or access certain deployment groups 901.
  • the administrator chooses a deployment group; for example, the root level group (/). This displays the contents of the deployment group.
  • the administrator chooses a deployment from the deployment list; for example, test. Clicking the 'Add' button 902 authorizes the user to run the deployment.
  • Hierarchical organization of configurations into deployment groups simplifies management and authorization.
  • the administrator can authorize an entire deployment group. The user is then able to run any deployments associated with the group.
  • the administrator can also assign rights by deployment, rather than by server.
  • the Administrative UI allows the administrator to view and edit server details in much the same way that other parameters have been configured. To see the server list, one expands 'Servers' in the navigation tree and selects 'View Servers.' Each server has a name, address and registry port. For example, a default entry is:
  • An 'Edit' button next to a server allows the administrator to update the server's details.
  • a 'New Server' button allows the administrator to add a new server by specifying server details: 'Name:' a unique name for identifying the server; for example, 'myserver;' 'Address:' a resolvable host name or IP address; and 'Port.' Clicking a 'Save' button saves the new server to the server list. Afterward, the name 'myserver' appears in the list of servers, which is available in other parts of the Administrative Ul; for example, when authorizing deployments.
  • a 'Manage Server' option under 'Servers' allows the administrator to view and upload Base Server and Receiver configuration files. Viewing a configuration file involves the following:
  • In-Use Config Files: lists the XML-based configuration files in use by the selected server. (Clicking 'Refresh Server' causes the server to re-read its configuration files.)
  • the configuration files could include the following:
  • An 'Upload' button allows uploading of a configuration file to a selected server.
  • Creating a new server group includes steps of:
  • Selecting a 'Manage Server Group' option under the 'Servers' heading in the navigation tree allows the administrator to upload configuration files in batch and refresh servers. For example, selecting a 'Refresh Server Group' button causes each server in the selected group to reread its configuration files. As shown in Figure 10, the status of the operation is displayed. Clicking an 'Uploading/Refreshing Status' button updates the UI with the latest progress. An 'Upload' button works similarly by sending a configuration file to the group. The appropriate local node details can be automatically substituted into the uploaded file.
  • Base Servers and Receivers can be configured to publish records of events that are stored in a central database so that the results of deployments are easily determined.
  • the reporting feature is configurable to any JDBC (Java Database Connectivity)-compliant database.
  • the Administrative UI provides several ways of generating and managing reports. Each method can be accessed by first expanding 'Reports' in the navigation tree.
  • Custom Report: Figure 13 shows a user interface 1300 for specifying custom queries. One selects 'Custom Report' in the navigation tree and fills in the search values 1303. The query can be saved as a 'Quick Report.' Results can be viewed in the UI or saved to a character-delimited file;
  • SQL Query Report: supports the creation of free-form queries. One starts by seeding the SQL (Structured Query Language) query with a custom report query 1301. This launches a 'SQL Query Report' window and pre-populates the SELECT statement, which can then be tailored to the specific need.
  • a SQL Query can be saved as a 'Quick Report' or run from the UI. Results may be viewed directly or saved to a character-delimited file;
  • Custom Report: described in greater detail below; enables the creation of custom queries for deployments that synchronize a content management application with a relational database;
•   a deploy-and-run (DNR) feature provides event triggers for integrating external programs or scripts into the distribution process. External tasks can be launched on source and/or target systems. Triggers may fire before or after various points in the process, upon success or failure. Such points may include:
  • the distribution of files from a content or code management system might utilize the following triggers during the distribution process:
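As an illustrative sketch of such triggers (the element and attribute names below are assumptions, not the product's documented schema), a deploy-and-run section of a deployment configuration might resemble:

```xml
<!-- Illustrative only: element and attribute names are assumed -->
<deployAndRun>
  <!-- quiesce the application on the target before files are transferred -->
  <task when="beforeDeployment" where="target"
        command="/opt/app/bin/quiesce.sh"/>
  <!-- restart the server only if the deployment succeeded -->
  <task when="afterDeployment" where="target" on="success"
        command="/opt/app/bin/restart.sh"/>
  <!-- notify an operator on the source side if anything failed -->
  <task when="afterDeployment" where="source" on="failure"
        command="/usr/local/bin/notify-ops.sh"/>
</deployAndRun>
```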
•   the invention takes a flexible, configuration-based approach to deployments because custom scripting is tedious and error-prone.
•   Deployment configurations specify deployment rules using industry-standard XML.
•   a rich vocabulary supports various distribution strategies. The user can create new configurations or modify the examples provided.
•   Configurations can be written and/or edited using third-party editors, or with the editing capability provided by the Admin UI.
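As a sketch of what such an XML configuration might look like: only the 'fileSystem' and 'remoteDiff' element names and the area attribute are drawn from this document's Composer walkthrough; every other name here is an assumption, not the product's actual schema.

```xml
<!-- Illustrative deployment configuration sketch; most names assumed -->
<deploymentConfiguration name="fanout">
  <fileSystem>
    <!-- deliver only the delta between the source area and each target -->
    <remoteDiff area="/export/content/webapp"/>
  </fileSystem>
  <targets>
    <node name="London" host="lon-web-01" port="20014"/>
    <node name="Tokyo"  host="tyo-web-01" port="20014"/>
  </targets>
</deploymentConfiguration>
```

Once uploaded to a Base Server, a configuration of this general shape could be selected and run, simulated, or scheduled from the Administrative UI.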
  • Uploading a deployment configuration to a Base Server includes the following steps: • Expanding the 'Configuration' menu in the navigation tree and selecting 'Upload Configuration;'
  • the 'Browse' button may be used to locate a deployment configuration file, for example, 'OpenDeployNG/examples/conf-od/fanout.xml;'
  • Viewing the contents of a deployment configuration includes steps of: • Selecting 'View Configurations' in the navigation tree;
  • the contents of the selected deployment configuration are displayed. After selecting an XML element 1503 in the configuration, the element can be expanded or collapsed by clicking the adjacent '+' and '-' signs.
  • a 'new' button 1502 allows the user to create an entirely new configuration.
  • Deployment Configuration Composer 1600 ( Figure 16), which allows the user to edit the configuration according to pre-defined XML rules.
•   the Composer has two views, 'Errors' and 'Tree.' Tabs 1602 are provided for selecting the view.
  • the composer has a navigation tree 1603 for accessing deployment configuration elements.
  • the right side allows the user to update, add and remove elements and attributes.
•   the node 'fileSystem' 1604 (about mid-way down in the tree) contains a 'remoteDiff' 1605 element having an 'Area' attribute 1601.
•   the deployment configuration is named 'test.' By selecting 'Deployment Configuration' in the navigation tree and entering a new 'name' value, for example 'mytest,' a new file, distinct from the 'test' file, is created. After renaming, clicking the 'Save' button 1607 at the top of the work area saves the file. After the file is saved, the XML is displayed in the composer window. After creating a new configuration file, the user can run the deployment as previously described.
  • deployment configurations support the delivery of structured XML content into relational databases.
  • 'DataDeploy Configuration' and 'Schema Mapping' may be selected in the navigation tree for setting up database deployments, which are described in greater detail below.
  • FIG. 17 shows a network topology 1700 for a parallel deployment.
  • the invention may distribute to multiple targets in parallel, which is more efficient than deploying to each target separately. For example, updates can be made transactional to ensure proper synchronization across servers. This is typically necessary for load-balanced or clustered web applications.
•   the invention also implements the concept of logical "nodes" and "replication farms," which allows the user to dissociate the physical identification of networked systems from deployment configurations and to conveniently organize sets of nodes. So, for example, one can simply deploy to the farm 'Remote Hubs,' which consists of the logical nodes 'New York,' 'London,' and 'Tokyo,' as shown in Figure 17. Replication farms are defined or referenced within each deployment configuration. 'Target references' let the user make changes to replication farms in a consolidated file, which is faster and more accurate than having to update many individual deployment configurations.
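The 'Remote Hubs' farm of Figure 17 might be declared along these lines (a hedged sketch; the element and attribute names are assumptions, not the product's actual schema):

```xml
<!-- Illustrative only: names assumed. Deployment configurations would
     then reference the farm by name instead of listing physical hosts. -->
<replicationFarm name="Remote Hubs">
  <node name="New York" host="nyc-hub-01"/>
  <node name="London"   host="lon-hub-01"/>
  <node name="Tokyo"    host="tyo-hub-01"/>
</replicationFarm>
```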
  • the invention supports multi-tiered server topologies in which deployments are chained from one tier to the next.
  • Target systems may be geographically dispersed, with no limit to the number of tiers in the deployment chain.
  • Typical scenarios include: • Distributing to hub nodes that in turn deploy to regional server farms;
  • Chaining means specifying within the deployment configuration which deployment to invoke next on a target system.
•   Figure 18 shows a view 1800 of the Admin UI that illustrates chaining. For example, to automatically replicate content to European sites after deploying from 'San Francisco' to 'London,' the user simply specifies in the San Francisco deployment configuration which deployment to initiate next on the London system.
•   the Administrative UI provides hyperlinks that allow point-and-click navigation to downstream deployment status. It is also possible to request the termination of an active deployment via the Administrative UI or through the command line interface.
  • Routed deployments build on multi-tiered chaining to provide a highly scalable distribution approach, as shown in Figure 19.
  • An exemplary routed deployment from an American region 1901 to a European region 1902 involves separate legs from San Francisco to New York, New York to London, and then London to both Paris and Kunststoff.
  • a route is automatically computed from a pre-defined set of route segments, simply by specifying the source and destination.
  • the reporting database records a unique ID for each routed deployment, which yields a comprehensive audit trail by associating an initiating job with all downstream deployments.
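The route computation described above can be sketched as a graph search over the pre-defined segments. This is an illustration of the idea only; the product's actual routing algorithm is not specified in the text. A breadth-first search finds a shortest route (by hop count) from source to destination:

```python
from collections import deque

def compute_route(segments, source, destination):
    """Compute a delivery route from a pre-defined set of directed
    route segments, given only the source and destination."""
    adjacency = {}
    for src, dst in segments:
        adjacency.setdefault(src, []).append(dst)
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists between the endpoints

# Segments drawn from the Figure 19 example
segments = [("San Francisco", "New York"),
            ("New York", "London"),
            ("London", "Paris")]
print(compute_route(segments, "San Francisco", "Paris"))
# ['San Francisco', 'New York', 'London', 'Paris']
```

Each leg of the computed route would then be executed as a chained deployment, with the shared job ID providing the audit trail noted above.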
  • Distribution typically entails pushing updates to one or more servers.
•   a reverse deployment, briefly mentioned above, pulls files from a server. Examples where reverse deployments are used include:
  • the invention also provides several means by which files can be manipulated during the distribution process. These include:
  • a deployment configuration may specify rules for including and excluding files and directories.
  • the invention supports location-based filters as well as pattern-based filters using regular expressions;
•   Transfer rules: A set of rules covers how files should be handled during data transfer. These include deleting target files that are no longer present at the deployment source, preserving access controls, whether to follow symbolic links, timeout values, and retry counts. It is also possible to specify data compression levels when transmitting over bandwidth-constrained networks;
•   Permission and ownership rules: The invention provides the capability of specifying rules for manipulating permissions and ownership of deployed files and directories;
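The include/exclude filtering described above might behave as in the following sketch. The rule semantics (an include must match and no exclude may match) are an assumption for illustration; the text specifies only that location-based and pattern-based filters using regular expressions are supported.

```python
import re

def should_deploy(path, includes, excludes):
    """Return True if a file path passes the filter rules: it must
    match at least one include pattern and no exclude pattern.
    (Assumed semantics, for illustration only.)"""
    if not any(re.search(p, path) for p in includes):
        return False
    return not any(re.search(p, path) for p in excludes)

includes = [r"^htdocs/"]           # location-based: only this subtree
excludes = [r"\.bak$", r"/tmp/"]   # pattern-based exclusions

assert should_deploy("htdocs/index.html", includes, excludes)
assert not should_deploy("htdocs/old/index.bak", includes, excludes)
assert not should_deploy("src/main.c", includes, excludes)
```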
  • the invention provides a number of means to help manage the flow of code, content and configurations while maintaining data integrity, synchronization and security throughout the entire distribution process.
  • TRANSACTIONAL DISTRIBUTION ensures data integrity and application reliability by tracking transfer details and rolling back in the case of deployment failure. When an interruption occurs, the invention rolls back the deployment transaction and restores each target to its previous, error-free state. Any deployment can be transactional:
  • Parallel deployment As previously described, the invention provides the capability of making parallel deployments, so that the user can update multiple targets simultaneously, which is more efficient than deploying to each target separately.
  • a transactional parallel deployment 2000 assures that all destinations are kept completely synchronized. This is typically necessary for load- balanced or clustered web applications.
  • a parallel deployment is made transactional by simply setting an attribute in the deployment configuration. Doing so ensures that each parallel deployment leg runs in lockstep: setup, transfer, and commit. If one leg fails, then all targets are rolled back to their original state;
•   Quorum: Parallel deployments sometimes require only a subset of targets (known as a quorum) to receive updates for a transaction to be considered successful.
  • the invention allows the user to specify the number of targets to which updates must be successfully delivered before the deployment transaction is committed.
  • the quorum value can range from one to the total number of fan-out targets. If the quorum is met, successful targets are committed and failed ones are rolled back. Thus, each target is always left in a known state - updated or original.
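The quorum commit/rollback logic can be sketched as follows. This models only the decision rule stated above (commit successful targets if the quorum is met, otherwise roll everything back); it is not the product's implementation.

```python
def run_parallel_deployment(results, quorum):
    """Decide final target states for a transactional parallel
    deployment with a quorum. 'results' maps target name to
    True/False transfer success. Each target ends 'updated' or
    'original' - never an intermediate state."""
    successes = [t for t, ok in results.items() if ok]
    if len(successes) >= quorum:
        # quorum met: commit successful legs, roll back failed ones
        return {t: ("updated" if ok else "original")
                for t, ok in results.items()}
    # quorum not met: roll back every target to its original state
    return {t: "original" for t in results}

legs = {"london": True, "paris": True, "tokyo": False}
print(run_parallel_deployment(legs, quorum=2))
# {'london': 'updated', 'paris': 'updated', 'tokyo': 'original'}
print(run_parallel_deployment(legs, quorum=3))
# {'london': 'original', 'paris': 'original', 'tokyo': 'original'}
```

Setting the quorum equal to the number of fan-out targets recovers the all-or-nothing transactional behavior described earlier.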
•   Multi-tiered and routed deployments: As described above, the invention provides approaches for delivering updates to many servers efficiently by deploying to one tier of targets, which in turn deploys to a second tier, and so on. Transactional deployments ensure the integrity of updates across all servers, regardless of location within the network topology. If delivery to any server fails, all servers roll back to their original state. Additionally, the quorum feature may be employed to enforce unique success criteria at each tier.
LOGGING
  • a logging facility generates comprehensive logs for archiving and troubleshooting.
  • Log files on sender and receiver systems provide audit trails that can be used to satisfy compliance requirements by proving exactly when and where updates were distributed.
  • a user-configurable threshold limits the maximum size any log file is permitted to attain before it is archived and a new log is started.
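The size-threshold behavior might work along these lines (a sketch; the archive naming scheme is an assumption, as the text specifies only that the log is archived and a new one started):

```python
import os
import tempfile

def maybe_rotate(log_path, max_bytes):
    """Archive the log and start a new one once it reaches a
    user-configurable size threshold. Returns the archive path,
    or None if no rotation was needed."""
    if os.path.exists(log_path) and os.path.getsize(log_path) >= max_bytes:
        archive = log_path + ".1"      # assumed naming scheme
        os.replace(log_path, archive)  # archive the full log
        open(log_path, "w").close()    # start a fresh, empty log
        return archive
    return None

log = os.path.join(tempfile.mkdtemp(), "deploy.log")
with open(log, "w") as f:
    f.write("x" * 2048)                # exceed a 1 KB threshold
archived = maybe_rotate(log, max_bytes=1024)
print(archived.endswith("deploy.log.1"), os.path.getsize(log))  # True 0
```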
•   Log files can be accessed from the file system or viewed directly in the Administrative UI. Whenever a deployment is run, log files are created for the deployment job. The user can view log files for a particular deployment by selecting 'View Deployment' in the navigation tree. The 'View' pull-down menu provides options for viewing both sender and receiver logs.
  • buttons 2101 are provided for navigating through a log file and for refreshing the display.
  • the invention incorporates a number of features that enable secure distribution of code, content and configurations inside and outside firewalls as shown in the topology diagram 2200 of Figure 22. If desired, data can be encrypted during transfer to protect enterprise-sensitive information. Both strong (up to 168-bit SSL (Secure Sockets Layer)) and weak (40-bit symmetric key file) encryption mechanisms are supported.
  • Port authentication ensures that deployment targets communicate with only known senders, either directly or through firewalls.
  • SSL authentication may be enabled for added security.
  • the invention allows the user to restrict the directories to which trusted senders may deploy updates and to limit or prohibit the invocation of Deploy-and-Run tasks, described above, on receiving systems.
  • the invention offers the flexibility to configure the deployment listener port and administration ports.
  • Base Servers and Receivers can run with the level of authority deemed appropriate by the administrator.
  • the invention can run as a root or non-root user in UNIX environments, and as an administrator or non-administrator in WINDOWS environments.
•   each instance is separately configurable. For example, a hosting center may set up a unique Receiver instance for each client that will deploy updates. Each Receiver may have its own encryption setup and may be authorized to update specific file system directories. Additional security measures include the ability to lock down command line invocations to specific hosts, as well as confining user authentication for the Administrative UI and web services to a specific access service.
  • DAS DATABASE AUTO-SYNCHRONIZATION
  • the invention provides the capability of event-driven synchronized deployments of content from various repositories.
  • the present feature finds particular application in enterprises using content management software, for example TEAMSITE, supplied by INTERWOVEN, INC., Sunnyvale CA, to manage web content.
  • DAS automates deployment of forms-based structured content (known as data content records, or DCRs) into a database for rendering within the development environment.
  • DAS also enables the indexing of extended metadata into a database, which can then be used as the basis for metadata-based publication and expiration of content, described in greater detail below.
  • the Base Server is configured for database deployments to activate DAS.
  • a content management system is preferably also present on the Base Server host.
•   the Administrative UI can then be used to configure DAS and set up the content management system event server by expanding 'DAS' in the navigation tree.
•   DAS deployment reports are accessible by expanding 'Reports' in the navigation tree and selecting 'DAS Custom Report;'
  • the invention provides unified distribution architecture that seamlessly combines secure, reliable file distribution with delivery of structured content 2401 to databases that drive business applications, personalization servers, enterprise portals and search engines.
  • a data deployment module 2402 enables a Base Server to securely deliver relational database content via a standard Receiver 2403.
  • Integrated transactional delivery of file and database updates advantageously facilitates synchronized deployment to load-balanced or clustered applications.
  • File and database distribution is managed within a single user interface and reporting subsystem, minimizing the time needed to set up and record deployment activity.
•   the data deployment module is an optional feature that is first activated, for example by running a license enablement utility. Following this, the Base Server and Receiver are configured for database deployments. The Administrative UI can then be used to configure database deployments by expanding 'Configurations' in the navigation tree as shown in Figure 23:
•   'DataDeploy Configuration' allows the user to specify rules for the deployment, for example: type and location of source files, which schema mapping to use, and the target database specification;
  • 'Wrapper configurations' allows storage of configurations with their associated source data.
  • a wrapper configuration is created by selecting 'View Configurations' in the navigation tree, choosing a server and deployment group, clicking a 'DataDeploy Wrapper' check box, and clicking the 'New' button to bring up a Configuration Composer.
  • the invention synchronizes the delivery of XML-based structured content 2401 to the target database with delivery of code and unstructured content files to multiple servers as shown in Figure 24.
•   an intelligent delivery module enables a Base Server to use content attributes for smart distribution and syndication: • Metadata-based deployment: Deployment criteria are specified using a metadata query, for example a query on a content attribute such as a 'Type' tag;
  • Metadata-based deployment relies on a payload adapter, described above, that supports use of a JDBC-compliant database as the metadata repository.
  • content attributes can provide the basis for metadata-based publication and expiration of content. Users may also write their own payload adapters to integrate with other metadata repositories.
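The metadata-based selection can be sketched as follows. The product's actual query syntax is not given in this document, so the matching rule here (all key/value pairs must match) is an assumption for illustration:

```python
def select_assets(assets, query):
    """Pick the assets whose metadata attributes match every
    key/value pair in the query (assumed matching semantics)."""
    return [a["path"] for a in assets
            if all(a["metadata"].get(k) == v for k, v in query.items())]

assets = [
    {"path": "reports/q3.pdf", "metadata": {"Type": "Stock", "Region": "US"}},
    {"path": "news/today.html", "metadata": {"Type": "News"}},
]
# e.g. all assets tagged with Type = 'Stock', as in the offer example
print(select_assets(assets, {"Type": "Stock"}))  # ['reports/q3.pdf']
```

In the described architecture, the attributes themselves would come from a JDBC-compliant metadata repository via a payload adapter rather than an in-memory list.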
•   Syndication: Content reuse through syndicated delivery is supported via an offer/subscription management layer, as shown in Figure 25.
  • An offer 2501 defines the content source and criteria, including the metadata query for identifying relevant assets.
•   a subscription 2502 completes the deployment rules for an offer, including target nodes, schedule, and delivery mechanism, such as FTP or e-mail. Syndication takes advantage of the built-in scheduler, metadata-based deployment, and delivery adapters, all described above.
•   the Intelligent Delivery module is optional and is activated in the same way as the data deployment module. Offers and subscriptions can then be configured using the Administrative UI by expanding 'Syndication' in the navigation tree.
•   An offer is a partial deployment configuration that contains details about the source content location and criteria, including a metadata query for determining which content belongs to the offer. For example, an offer might include all financial reports with a metadata tag 'Type' having a value 'Stock.'
  • a subscription defines a completed set of deployment rules for an offer, including the target recipients, schedule and delivery mechanism. For example, one subscription might FTP assets defined by a particular offer to a set of partners on a weekly basis, while another subscription e-mails the same assets once per month to a group of customers.
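The offer/subscription split can be modeled as below. The field names are assumptions chosen to mirror the description above (an offer carries source and criteria; a subscription adds targets, schedule, and delivery mechanism):

```python
# Assumed field names, for illustration of the offer/subscription model
offer = {
    "name": "financial-reports",
    "source": "/content/finance",
    "criteria": {"Type": "Stock"},   # metadata query defining the offer
}
subscription = {
    "offer": "financial-reports",
    "targets": ["partner-a", "partner-b"],
    "schedule": "weekly",
    "delivery": "ftp",               # or e-mail, per the text
}

def complete_rules(offer, subscription):
    """Merge an offer with one of its subscriptions into a complete
    set of deployment rules."""
    assert subscription["offer"] == offer["name"]
    return {**offer,
            **{k: v for k, v in subscription.items() if k != "offer"}}

rules = complete_rules(offer, subscription)
print(rules["delivery"], rules["criteria"])  # ftp {'Type': 'Stock'}
```

A second subscription against the same offer (say, monthly e-mail to customers) would reuse the same criteria with different targets and schedule, which is the content-reuse point of the syndication layer.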
  • the web services interface can also be used to expose offers and subscriptions through a third-party application, such as a self-service portal for business partners.
  • the invention streamlines IT operations by providing for secure, automated provisioning of web application updates.
  • a web change management hub adds further control dimensions to the change management process. These include, for example:
  • the web change management hub maintains snapshots of code, content and configuration changes so that deployed web applications can be reverted to any previously known good state; and • Streamlined change process and approvals with workflow automation.
•   the management hub is installed separately on a host with a Base Server.
  • Branches and work areas 2601 provide the organizational structure for managing incoming code, content and configurations.
•   Application files are aggregated into a work area 2601 either by pushing from the respective source repositories 2602 or by pulling from within the management hub.
  • the content deployment system can be used to facilitate the transfer of files into the management hub.
  • the files can be copied into a work area through a file system interface to the management hub, which makes the hub store appear as a drive on WINDOWS systems or a file system mount on UNIX.
  • Automated workflows ensure approvals 2603 and notifications occur at the appropriate points in the change management process.
  • Web change management with the management hub and the content deployment system enables IT operations to realize substantial efficiency gains.
  • change request backlogs that typically plague the web application infrastructure are eliminated and IT departments can be much more responsive to their users.
  • Application developers and business managers benefit from the removal of a critical IT bottleneck, which translates into application changes being deployed to test and production servers quickly and accurately.
  • enterprises can adhere to IT governance requirements by consolidating and enforcing web change processes while also maintaining historical records and representations of all deployed web applications.
•   the invention can be utilized throughout a complex web application environment, regardless of where code and content are managed, or where they are destined to go.
•   the content deployment system can directly integrate with a wide range of source code management or content management systems.
  • the invention can deliver code or content to any network destination, including at least application servers, web servers, file servers, databases, caches, and CDNs (content delivery network). The result is a distribution solution that can be utilized enterprise-wide.
  • the various modules and functional units described herein are software modules comprising computer-readable code for carrying out the various processes that constitute the invention's unique functionality.
  • the various modules could be programmable hardware modules embodying computer-readable instructions for carrying out the various processes that constitute the invention's unique functionality.
•   although the software modules of the preferred embodiment are created using a variety of common languages and protocols, such as JAVA, XML, SOAP, WSDL and SNMP, the invention is not limited to those languages and protocols.
  • the principles of the invention as described herein can be implemented using other languages and protocols. Such are entirely consistent with the spirit and scope of the invention.

Abstract

A system for transactionally deploying content across multiple machines in a network environment automates and synchronizes secure (107) and reliable distribution of digital assets to multiple network locations, allowing controlled provisioning and synchronization of code and content updates to live applications. A distributed architecture includes at least one receiver (102)- a secure listener configured to process incoming distribution jobs-and at least one base server- a sender (101) that may also act as a receiver. An administration interface (103) allows administrative and reporting services and deployment management. Using the administrative interface, users are enabled to launch, simulate, schedule and monitor activities for any network location at any time. The system provides fan-out and multi-tiered deployment topologies expandable to hundreds of servers. Each deployment is fully transactional, permitting rollback (108) of the system to its 'last known good' state in the case of failure.

Description

SYSTEM FOR TRANSACTIONALLY DEPLOYING CONTENT ACROSS MULTIPLE MACHINES
BACKGROUND OF THE INVENTION
TECHNICAL FIELD The invention relates generally to multi-computer transfer of data. More particularly, the invention relates to transactional deployment of data across multiple machines.
DESCRIPTION OF RELATED ART
Today's economic pressures are forcing IT management to identify and eliminate redundant, customized, inefficient processes that exist within their businesses. One area of inefficiency that has been discovered in today's increasingly complex web-based application environment is the code and content distribution process.
Hidden within nearly every web application, from development, through QA, to a live, production environment is a set of manually developed distribution processes that are often unsecured, expensive to maintain, and difficult to scale.
Home-grown distribution processes are typically based on FTP (file transfer protocol), a mechanism for exchanging files between servers over the Internet. For example, J. White, Portable and dynamic distributed transaction management method, United States Patent No. 6,115,710 (September 5, 2000) describes a distributed application architecture that includes a user interface for use by an application developer to construct executable application load modules for each system on which an application will reside. Transfer of load modules occurs by way of a conventional FTP (file transfer protocol) application. Although FTP is an ideal point-to-point utility, the tool must be configured or customized each time a new target destination or content origination point is identified. This customization can be labor-intensive, and in the long run, it drives up the total cost of ownership of any web-based application relying on FTP for distribution because of the need to manage and maintain each customization individually.
The Open Source movement has generated a handful of tools to help address the distribution challenge. RSYNC, a utility providing fast, incremental file transfer, is one such tool. While RSYNC is a more sophisticated tool than standard FTP, it lacks built-in encryption and authorization to meet security requirements; it does not provide an easy means of integrating the distribution process with other applications, and it is difficult to scale.
Software products also often come with some minimal set of proprietary distribution tools. One example is the SITESERVER product (MICROSOFT CORPORATION, Redmond WA), which featured CONTENT REPLICATION SERVER (CRS) technology. Technologies such as CRS offer adequate distribution capacity within their respective environments, but they offer little value in distributed, multi-application and multi-platform environments.
The art provides additional examples of content distribution. For example, M. Muyres, J. Rigler, J. Williams, Client content management and distribution system, United States Patent Application Pub. No. US 2001/0010046 (filed March 1 , 2001 , published November 28, 2002) describe a digital content vending machine and methods for distributing content to and managing content on the machine. What is described is an e-commerce application wherein single copies of selected digital assets are distributed to single clients in response to a purchase request from a user.
P. Brittenham, D. Davis, D. Lindquist, A. Wesley, Dynamic deployment of services in a computing network, United States Patent Application Pub. No. US 2002/0178254 (filed May 23, 2001 , published November 28, 2002) and P. Brittenham, D. Davis, D. Lindquist, A. Wesley, Dynamic redeployment of services in a computing network, United States Patent Application Pub. No. US 2002/0178244 (filed May 23, 2001 , published November 28, 2002) describe methods and systems for dynamically deploying and redeploying services, such as web services, in a computer network. Conditions such as usage metrics for incoming requests are used to trigger dynamic deployment of web services to locations in the network to improve network efficiency.
C. Pace, P. Pizzorni, D. DeForest, S. Chen, Method and system for deploying an asset over a multi-tiered network, United States Patent Application Pub. No. US 2003/0051066 (filed September 4, 2001 , published March 13, 2003) and C. Pace, P. Pizzorni, D. DeForest, S. Chen, Method and system for deploying an asset over a multi-tiered network, United States Patent Application Pub. No. US 2003/0078958 (filed September 4, 2001 , published April 24, 2003) describe a system for deploying digital assets wherein an asset may represent network and/or application components (e.g., data, objects, applications, program modules, etc.) that may be distributed among the various resources of the network. In one embodiment, a target node's environment may be adjusted before an asset is deployed to that target node. In an alternative embodiment, a target deployment adapter, associated with the asset, may be selected and deployed with the asset in order to allow the asset to operate in the target node environment.
While the above examples describe various aspects of content distribution, none contemplates automated, transactional distribution of any type of digital asset in which assets managed in any type of repository or file system are deployed to all touch points across an enterprise. Furthermore, none contemplates parallel deployments, routed deployments, multi-tiered deployments and reverse deployments. None contemplates security options that include security of communications between machines and data integrity.
Thus, there exists a need in the art for an efficient means of content distribution that disseminates the appropriate content to the right parties and places at the right time. It would be advantageous for such to maintain integrity of the deployed content by keeping content synchronized while distributing from multiple management systems to multiple network destinations in parallel, routed, multi-tiered and reverse deployments. It would also be advantageous if such were scalable and capable of protecting the deployed content from unauthorized access.
SUMMARY OF THE INVENTION
Therefore, in recognition of the above needs, the invention provides a system for transactionally deploying content across multiple machines in a network environment that automates and synchronizes the secure and reliable distribution of code, content and configurations to multiple network locations, thereby allowing controlled provisioning and synchronization of code and content updates to live applications.
The invented system employs an open, distributed architecture that includes at least one receiver — a secure listener that processes incoming deployments from one or more senders, and at least one base server — a sender that may also act as a receiver. By using such architecture, the invention is able to deploy digital assets managed in any repository or file system to any type of network touch point — file servers, application servers, databases, and edge devices. Use of a base server as a receiver facilitates multi-tiered deployments.
The invention additionally includes an administration interface to be installed on a network-accessible system to provide administrative and reporting services and management of the deployment process. Using the administrative interface, users are enabled to launch, simulate, schedule and monitor activities for any network location at any time. A command line interface and web-services API (application programming interface) enable programmatic initiation of system functions. The invention also provides for management of user rights with fine granularity.
The invention supports ECD (enterprise content deployment) with fan-out, multi-tiered and routed deployment topologies capable of including hundreds of servers. The invented system also provides a variety of content manipulation features and is optimized to deliver only the delta changes between a source and each target. The invented system is scalable, allowing server farms to be added incrementally as the network infrastructure changes and develops. Each deployment is fully transactional, permitting rollback of the system to its "last known good" state in the case of failure.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 provides an architecture diagram of a system for transactionally deploying content across multiple machines according to the invention;
Figure 2 provides a flow diagram of an exemplary network topology from the system of Figure 1 according to the invention;
Figure 3 shows a stack diagram of an open content deployment protocol incorporated from the system of Figure 1 according to the invention;
Figure 4 shows a stack diagram of a service-oriented architecture from the system of Figure 1 according to the invention;
Figure 5 provides a screenshot of a login screen to the system of Figure 1 according to the invention;
Figure 6 provides a screen shot of an administrative user interface (UI) to the system of Figure 1 according to the invention;
Figure 7 provides a screen shot of a user interface for managing deployments from the administrative UI of Figure 6 according to the invention;
Figure 8 provides a screen shot of a user interface for scheduling deployments from the administrative UI of Figure 6 according to the invention;
Figure 9 provides a screen shot of a user interface for managing user rights and privileges from the administrative UI of Figure 6 according to the invention;
Figure 10 provides a screen shot of a user interface for viewing server status from the administrative UI of Figure 6 according to the invention;
Figure 11 provides a screen shot of a user interface for generating and managing reports from the administrative UI of Figure 6 according to the invention;
Figure 12 provides a screen shot of a deployment leg report and a manifest report accessible via the user interface of Figure 11 according to the invention;
Figure 13 provides a screen shot of a screen for configuring a custom report from the user interface of Figure 11 according to the invention;
Figure 14 provides a screen shot of a user interface for managing deployment configurations from the administrative UI of Figure 6 according to the invention;
Figure 15 provides a screen shot of a user interface for viewing deployment configurations from the administrative UI of Figure 6 according to the invention;
Figure 16 provides a screen shot of a deployment configuration composer from the administrative UI of Figure 6 according to the invention;
Figure 17 illustrates a parallel deployment from the system of Figure 1 according to the invention;
Figure 18 illustrates a multi-tiered deployment from the system of Figure 1 according to the invention;
Figure 19 illustrates a routed deployment from the system of Figure 1 according to the invention;
Figure 20 illustrates rollback of a parallel deployment after failure according to the invention;
Figure 21 provides a screenshot of a log view from the administrative UI of Figure 6 according to the invention;
Figure 22 provides a diagram illustrating security measures of the system of Figure 1 according to the invention;
Figure 23 provides a screen shot of a user interface for a module for synchronized deployment of database content from the system of Figure 1 according to the invention;
Figure 24 provides a diagram of an architecture for synchronized deployment of database content from the system of Figure 1 according to the invention;
Figure 25 provides screen shots of the user interface for an intelligent delivery module from the system of Figure 1; and
Figure 26 provides a schematic of a control hub for automating provisioning of web application updates according to the invention.
DETAILED DESCRIPTION
The following detailed description should be read with reference to the drawings. The drawings depict illustrative embodiments that are not intended to limit the scope of the invention. The invention provides a system for the cross-platform, transactional transfer of code, content and configurations to multiple machines. As shown in Figure 1, the system architecture 100 supports enterprise distribution and automates the deployment process, while providing a high degree of flexibility and administrative control. Advantageously, the system easily integrates with any code or content management system, thus making content distribution a natural extension of established business processes. An open architecture enables the invention to distribute assets managed in any repository or file system to all network touch points found in today's IT environments, including file servers, application servers, databases and edge devices. The system includes one or more senders 101 and one or more receivers 102. In a preferred embodiment, a base server fulfills the role of sender. The base server is configured both to send content and to receive it. The receiver is a secure listener configured to process incoming distribution jobs. An administrative console 103 provides administrative and reporting services and deployment management. At the administrative console, configuration files are created, edited, managed and distributed throughout the enterprise as needed.
As previously described, content 106 refers to any digital asset of an enterprise, including, but not limited to:
• files;
• database data;
• XML;
• media; and
• application code.
The distribution architecture 100 retrieves content and facilitates any necessary transformations as it is distributed along the way. The administration console 103 is used to administer distribution modules 109, base servers and/or receivers residing across the network. The administration console also incorporates a reporting and auditing module 104. Security features 107, including encryption of deployed content and secure connections, safeguard an enterprise's digital assets against unauthorized access. Deployment processes are fully transactional, permitting rollback 108 of the system and the content to its last known good state in case a deployment fails. More will be said about each of the above system elements in the paragraphs below.
The system facilitates mission-critical processes within IT operations throughout the enterprise including:
• Enterprise content distribution: Universal distribution of all content types;
• Web change management: Controlled provisioning of code, content and configuration updates to web applications;
• Deployments from Interwoven ECM: Intelligent, automated content distribution, web publishing and syndication from enterprise content management (ECM) systems.
The content deployment system enables IT organizations to:
• substantially reduce distribution costs;
• automate inefficient distribution processes;
• securely synchronize updates and automate rollback; and
• maintain audit trail for all changes.
Figure 2 provides a flow diagram of an exemplary network topology from the invented system. Illustrated is a case wherein code and content 201 is being developed in San Francisco. From a hub system in San Francisco 202, the content is distributed to a hub at each of three geographically dispersed sites 203: New York, London and Tokyo. From there, the system replicates updates to regional targets 204. If the assets to be distributed reside in a repository, such as a software configuration or content management system, the system can access the assets directly through the file system or by automatically invoking an appropriate export facility. The distribution environment may be as simple or sophisticated as required by the implementer. Systems may include a mix of, for example, WINDOWS and UNIX platforms, or other computing platforms, such as APPLE or VMS. Additionally, the invention is applicable in a wide variety of distributed computing environments, either homogeneous or heterogeneous. As shown, each system that participates in the distribution environment runs a receiver, for example the regional targets 204, or a base server, for example the hubs 202, 203.
The system delivers value both to the administrator who sets up and manages the deployment environment and the user who submits deployment jobs. The administrator uses the administrative console, by means of a browser-based Administrative UI (user interface) 400, described in greater detail below, to assign users and authorizations to the system. Also by means of the Admin UI, an administrator configures base servers, receivers and deployment rules via XML (extensible markup language) files. A user may then log in and initiate or schedule deployment jobs.
OPEN CONTENT DEPLOYMENT PROTOCOL
In its preferred embodiment, the invention employs a connection-oriented protocol that defines how senders and receivers transfer content and communicate status information. As shown in the protocol stack of Figure 3, the underlying base transport protocol is TCP/IP. When configured for high encryption operation, the Content deployment protocol sits above the SSL (Secure Sockets Layer) protocol.
The open content deployment protocol consists of a series of handshakes and operation directives that are exchanged between the sender and receiver. Once a connect session is established, the sender pushes over the configuration parameters for the deployment. The receiver, with this session information in hand, executes the deployment accordingly.
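The handshake-then-execute exchange described above may be sketched as follows. This is a minimal in-memory simulation for illustration only: the message names ('HELLO', 'CONFIG', 'DONE'), the JSON encoding and the parameter keys are assumptions, not the actual wire format of the open content deployment protocol.

```python
# Simulated sender/receiver exchange: connect handshake, push of
# configuration parameters, then execution of the deployment.
import json


class Receiver:
    """Secure listener that executes a deployment from pushed parameters."""

    def handle(self, message: str) -> str:
        msg = json.loads(message)
        if msg["type"] == "HELLO":
            return json.dumps({"type": "HELLO_ACK"})
        if msg["type"] == "CONFIG":
            # Session parameters for the deployment arrive here.
            self.session = msg["params"]
            return json.dumps({"type": "CONFIG_ACK"})
        if msg["type"] == "DONE":
            # Execute the deployment using the stored session info.
            return json.dumps({"type": "STATUS", "ok": True,
                               "target": self.session["targetDir"]})
        return json.dumps({"type": "ERROR"})


def deploy(receiver: Receiver, params: dict) -> dict:
    """Sender side: handshake, push configuration, then request execution."""
    ack = json.loads(receiver.handle(json.dumps({"type": "HELLO"})))
    assert ack["type"] == "HELLO_ACK"
    receiver.handle(json.dumps({"type": "CONFIG", "params": params}))
    return json.loads(receiver.handle(json.dumps({"type": "DONE"})))
```

In the actual protocol the exchange occurs over TCP/IP (optionally SSL), but the ordering shown here mirrors the description: session established, configuration pushed, deployment executed accordingly.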
DEPLOYMENT TYPES
The type of deployment determines the behavior of the receiver and which options and functionality to activate and execute. The three types of deployment are described below.
• Content management: In a content management deployment, content from the content management server is pushed over to the receiver. The receiver operates in passive mode;
• File list: In a file list-based deployment, files and/or directories are pushed over to the receiver. The receiver operates in passive mode; and
• Directory comparison: In a directory comparison deployment, the source-side directory information is sent over to the receiver. The receiver compares the source-side directory information against the target-side directory information to determine what content needs to be transferred.
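The directory-comparison decision can be sketched as a diff between two listings. The manifest representation (a mapping of path to checksum) is an assumption for illustration; the protocol's actual directory-information format is not specified here.

```python
# Directory-comparison deployment: the receiver diffs the source-side
# listing against the target side to decide what must be transferred
# and what is stale on the target.
def compare(source: dict, target: dict):
    """Return (files to transfer, files absent from the source)."""
    transfer = [p for p, digest in source.items() if target.get(p) != digest]
    stale = [p for p in target if p not in source]
    return sorted(transfer), sorted(stale)
```

Only the incremental changes (new or modified files) are transferred, which matches the optimization described later for multi-tiered topologies.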
TRANSACTIONAL DEPLOYMENT
As described above, the invention provides a transactional deployment option that maintains the previous state of the destination directory, in case the currently-initiated deployment, for any reason, fails. The deployed files are staged in the destination directory while a shadow copy of the original content is created for rollback upon failure. This shadow copy is created per content item (file/directory) as the deployment progresses. Thus, if a rollback is required, only the files that have been deployed so far are reverted. The rest of the content remains untouched.
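The shadow-copy-and-rollback behavior can be modeled as follows. This is a minimal in-memory sketch under the assumption that the destination directory is a simple mapping of file names to contents; a real deployment stages shadow copies on disk.

```python
# Transactional deployment: each file is shadow-copied before it is
# overwritten, so a failure rolls back only the files deployed so far.
class TransactionalTarget:
    def __init__(self, files: dict):
        self.files = files   # destination directory contents
        self.shadow = {}     # per-item shadow copies for rollback

    def deploy(self, updates: dict, fail_after: int = None) -> bool:
        for i, (path, data) in enumerate(updates.items()):
            if fail_after is not None and i >= fail_after:
                self.rollback()          # deployment failed mid-way
                return False
            # Shadow-copy the original content (None marks a new file).
            self.shadow[path] = self.files.get(path)
            self.files[path] = data
        self.shadow.clear()              # commit: discard shadow copies
        return True

    def rollback(self):
        """Revert only the files deployed so far; leave the rest untouched."""
        for path, original in self.shadow.items():
            if original is None:
                del self.files[path]     # file did not exist before
            else:
                self.files[path] = original
        self.shadow.clear()
```

Note how content that was never touched by the failed deployment is left exactly as it was, matching the per-item shadow-copy behavior described above.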
REVERSE DEPLOYMENT
The deployments described earlier are considered "push" deployments. The invention also allows reverse deployments, in which content is "pulled" from a remote directory.
AUTHENTICATION
The invention's authentication options ensure that communication occurs with a known machine in a known manner and that data is received directly from the known machine without interception by a third party. The types of authentication are described below:
• Authentication by IP address: The invention can be configured to work with a firewall to ensure that the receiver is communicating with a known machine in a known manner. The receiver can be configured to listen on a specific port for connection attempts from a specific IP address permitted by the firewall. The receiver can be further configured to receive content only from a known, trusted source;
• Authentication by SSL certificate: The invention can be configured to work with SSL certificates to ensure that data is received directly from a known machine without any interception by a third party. An affiliated Certificate Authority (CA) generates public key/private key pairs for both sender and receiver.
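By way of illustration, the SSL-certificate option can be sketched with Python's standard ssl module: the receiver requires the sender to present a certificate issued by the affiliated CA. The file names are placeholders, and the certificate-loading calls are shown commented out because the files are assumed rather than supplied.

```python
# Receiver-side TLS context requiring client (sender) authentication.
import ssl


def receiver_context(certfile: str = "receiver.pem",
                     keyfile: str = "receiver.key",
                     ca: str = "ca.pem") -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # sender must present a certificate
    # ctx.load_cert_chain(certfile, keyfile)  # receiver's own identity
    # ctx.load_verify_locations(ca)           # trust the affiliated CA
    return ctx
```

With mutual certificate verification in place, a third party can neither impersonate the sender nor read the deployed content in transit.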
SERVICE-ORIENTED ARCHITECTURE
A service-oriented architecture is designed to enable a loose coupling between interacting software agents. Figure 4 provides a stack diagram of a service-oriented architecture according to the invention. The invention includes a SOAP- (simple object access protocol) based interface that provides programmatic access to the various functions and capabilities of the system. A language-neutral, firewall-friendly API exposes web services, such as starting a deployment or retrieving the status of a deployment, using standard WSDL (web services description language).
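A call to the web services interface might be framed as follows. The operation name ('startDeployment'), its parameters and the absence of a namespace on the body element are hypothetical; in practice the real operation names and types come from the published WSDL.

```python
# Building a SOAP request envelope for a hypothetical
# "startDeployment" web service operation.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"


def start_deployment_envelope(config: str, instance: str) -> str:
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, "startDeployment")   # hypothetical operation
    ET.SubElement(op, "configuration").text = config
    ET.SubElement(op, "instance").text = instance
    return ET.tostring(env, encoding="unicode")
```

Because the interface is language-neutral and carried over HTTP, such an envelope could equally be produced by a Java, .NET or scripting client behind or outside a firewall.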
ADAPTIVE ARCHITECTURE
The invention provides a programmatic infrastructure to broaden applicability of content distribution and web change provisioning within diverse computing environments. Elements of such architecture include:
• Payload adapters: A Base Server can be integrated with an arbitrary source or metadata repository via a payload adapter, which is executed in process at the start of a deployment job. A parameter string or XML-based query is passed to the adapter from the deployment configuration file, described in more detail below. The adapter prepares a payload of files, which is returned to the Base Server, compared with the targets, and deployed or deleted as appropriate;
• Delivery adapter: Deployments may include delivery adapters, which extend the invention to any target application server, protocol or device. After files are deployed to a target Base Server, a delivery adapter is invoked in process with a manifest of deployed files. The adapter then processes the files; for example, by pushing new content into a set of cache servers;
• Routing adapter: Routed deployments (described infra) rely on an adapter for computing multi-tiered delivery routes for deployed files.
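The adapter hooks above can be summarized as two in-process interfaces: one invoked at the start of a job to assemble the payload, and one invoked on the target with the manifest of deployed files. The method names and the treatment of the query string are assumptions for illustration.

```python
# Sketch of the payload- and delivery-adapter extension points.
class PayloadAdapter:
    """Runs in process on the Base Server at the start of a job."""

    def prepare(self, query: str) -> list:
        """Return the payload of files for this deployment."""
        raise NotImplementedError


class DeliveryAdapter:
    """Invoked on the target with a manifest of deployed files."""

    def deliver(self, manifest: list) -> None:
        raise NotImplementedError


class SuffixAdapter(PayloadAdapter):
    """Example payload adapter: treat the query as a file-suffix filter."""

    def __init__(self, repository: list):
        self.repository = repository

    def prepare(self, query: str) -> list:
        return [f for f in self.repository if f.endswith(query)]
```

An implementer would register a concrete adapter with the Base Server; the deployment configuration then supplies the parameter string or XML query that `prepare` receives.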
ENTERPRISE SCALABILITY
The invention supports enterprises with multi-tiered deployment topologies consisting of tens or hundreds of servers inside and outside firewalls. Deployments are optimized to distribute only the incremental changes between a source and each target. Servers can be added as initiatives grow, which affords a solution that is readily adapted to a continually changing IT infrastructure. Moreover, cross-version compatibility and the ability to run multiple instances of the invention on a host provide a capability of phased upgrades in production environments.
SERVICES
Figure 5 provides a screenshot of a login screen 500 to the system of Figure 1. In one embodiment of the invention, during authentication, a user is asked to provide a user name 501 and password 502, to select a server from a menu 503, and to specify the user's role 504, for example 'user' or 'administrator.' The preceding description is meant only to be illustrative. Other authentication processes are entirely consistent with the spirit and scope of the invention.
BROWSER-BASED USER INTERFACE (UI)
A browser-based UI 600 grants ready access to all major system functions and processes, thus streamlining administration and execution of the distribution process. In addition, a command line interface and web services API (application programming interface), described in greater detail below, are also available for authoring automated scripts to initiate system functions. Administrators can take advantage of the browser-based Administrative UI to set up the environment and monitor activities anywhere at any time. Users also benefit from the Admin UI, which makes launching, simulating and scheduling distribution jobs quick and easy. The Admin UI lets administrators and users work from anywhere across the network. A person logging into the system is authenticated using the username and password for the underlying operating system or user directory.
The Administrative UI includes a navigation tree 601 that grants access to a number of functional areas. In certain embodiments, these functional areas may include, as shown:
• Deployments: start deployments; view deployment status and results;
• Schedules: create and view schedule entries for automatic deployments;
• Configurations: view, edit and upload deployment configurations;
• Servers: view and manage base Servers and receivers;
• Reports: create and run deployment report queries; view or download reports;
• User Access: assign access rights to base servers and receivers; restrict users' ability to initiate deployments;
• Database auto-synchronization: Configure database auto-synchronization for content from content management systems;
• Syndication: Manage syndicated content offers and subscriptions.
The main work area of the Administrative UI displays details and functions related to the functional area selected in the navigation tree. As shown in Figure 6, the 'deployment' functional area 602 is selected. Thus, the main work area of the UI provides details and functions 604 related to 'deployments.' Arrows 603 allow the user to expand or contract each functional branch of the navigation tree 601 with a mouse-click.
ONLINE DEPLOYMENT MANAGEMENT
Users can run or simulate deployments directly through the Admin UI. In running a deployment, the user initiates a job that is specified based on the particular deployment configuration selected. The process of creating a deployment configuration is described in greater detail below. Simulation is similar to running a deployment, except that no files are transferred, which allows a user to verify the behavior of a deployment configuration quickly without moving potentially many megabytes of data.
As shown in Figure 6, running a deployment involves expanding 'Deployments' in the navigation tree 601 and selecting 'Start Deployment.' Starting a deployment includes the following steps:
• Choosing the server from which the deployment is to be initiated; for example, localhost (as shown);
• Selecting a deployment group: deployments can be organized into groups. The user selects a deployment group from the list; for example, the root level group (/);
• Deployment: The user selects a deployment configuration from a list; for example, 'test.' The deployment configuration is an XML file that specifies deployment rules, such as the source area, the target and filters. Additional parameters may be specified. o Logging Level: either Normal or Verbose. o Deployment Instance: A unique name for a deployment job. o Parameters: Key-value pairs to be used in a deployment that has been configured for parameter substitution.
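The parameter substitution mentioned above can be illustrated with a small sketch. The `${name}` placeholder syntax is an assumption borrowed from Python's standard `string.Template`; the product's actual substitution tokens may differ.

```python
# Key-value parameter substitution in a deployment configuration,
# leaving any unsupplied placeholders untouched.
from string import Template


def substitute(config_text: str, params: dict) -> str:
    return Template(config_text).safe_substitute(params)
```

This lets one deployment configuration serve many jobs: the user supplies different key-value pairs at start time and the same rules apply to different source areas or targets.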
After clicking the 'Start Deployment' button, the UI indicates that the deployment has started and provides details; for example, job ID and start time. By selecting 'View Deployments' in the navigation tree, the user is presented an interface 700 that allows monitoring of the status of the deployment that is currently executing.
• Selected server 703: the value previously selected, e.g. 'localhost.'
• View 704: Indicates whether to look at the server as sending or receiving. (A base server can be both a sender and receiver, such as a hub node in a multi-tiered deployment or when performing a loop-back deployment.)
• Check boxes 705: These allow the user to filter which jobs to view: for example 'active,' 'completed' and 'scheduled,' including how many days ahead to look. An 'Update' button 706 refreshes the display after making a change.
• Deployments list 710: The deployments list displays deployments for the selected server. The list is filtered according to the view and check boxes described above. Clicking a column heading changes the sort order;
• Details list 702: Clicking on a Name (ID) in the Deployments list updates the details list with details about each deployment leg. For example, a parallel deployment to New York, London and Tokyo would have three legs.
In addition, the command line tool, mentioned above, may be used instead of the Administrative Ul to initiate deployments.
JOB SCHEDULER
A built-in scheduler 800 allows users to schedule jobs once or at recurring intervals. Jobs may be scheduled, deactivated and reactivated from the Administrative UI using the job scheduler. To schedule a job, the user expands 'Schedules' in the navigation tree 601 and selects 'New Schedule'. The work area of the UI shows the 'Scheduler' details 801, as in the 'Deployments' interface. Scheduling includes the following steps:
• Selecting Server, deployment group, deployment: Server, deployment group and deployment are selected as previously described;
• Selecting Start Date: the user provides a start date by choosing a month, day and year, or by clicking the 'Calendar' button to pop up a calendar 803 and select a date;
• Selecting Start Time.
• Naming the Deployment Instance;
• Specifying parameters: specification of unique name:value pairs;
• Creating a Description: the user can describe the scheduled deployment in greater detail; and
• Specifying Deployment Frequency: if 'once' is selected, then the deployment runs at the date and time specified. Alternatively, a frequency may be selected, such as daily. Depending upon the frequency selected, it may be necessary to provide additional scheduling details.
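The recurrence behavior of such a schedule entry can be sketched as follows: given a start date/time and a frequency, the scheduler finds the next occurrence at or after the current time. Only 'once' and 'daily' are modeled; the computation is illustrative, not the product's actual scheduling algorithm.

```python
# Computing the next run of a scheduled deployment job.
from datetime import datetime, timedelta


def next_run(start: datetime, now: datetime,
             frequency: str = "daily") -> datetime:
    """Next occurrence of the job at or after 'now'."""
    if frequency == "once" or now <= start:
        return start
    period = timedelta(days=1)       # only 'daily' is modeled here
    elapsed = now - start
    periods = elapsed // period      # whole periods already elapsed
    if elapsed % period:             # partway into a period: round up
        periods += 1
    return start + periods * period
```

A weekly or monthly frequency would substitute a different period, with additional detail (e.g. day of week) supplied by the user as noted above.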
The schedule details are saved by clicking the 'Save' button. A 'Deployment Schedules' window (not shown) is accessible via 'View Schedules' in the navigation tree 601. Having functional capabilities analogous to the 'View Deployments' window, this feature allows the user to edit details, delete jobs, hold or activate a pending job, and refresh the view based on the selected deployment and group. The command line interface may also be used to schedule deployments, deactivate scheduled jobs, delete jobs and retrieve schedule details.
CENTRALIZED ADMINISTRATION
The invention includes an SNMP (simple network management protocol) agent to enable monitoring of the system via standard network management tools.
Administrators obtain status and alerts that are readily correlated to deployment reports and logs. Such alerts include, for example, failure conditions such as abnormal termination of a deployment process, failure of a distribution job, and 'hung' jobs that are idle for extended periods of time. By providing flexible and comprehensive feedback within large multi-server networks, the administrator is able to track the overall health of the network.
Additional features facilitate large-scale installations of the invention. For example:
• deployment groups simplify deployment management and authorization;
• changes can be made to multiple Base Servers or Receivers in a single batch;
• routed deployments streamline distribution over hundreds of nodes;
• target references for parallel deployments can be consolidated so that a single change is quickly and accurately applied to many deployments;
• fine-grained user rights allow segmenting of enterprise initiatives;
• manage servers remotely;
• generate detailed reports;
• integrate external tasks with deployment jobs;
• connect into diverse IT environments; and
• incorporate provisioning into a service oriented architecture.
USER AUTHENTICATION AND DEPLOYMENT AUTHORIZATION
Using the Administrative UI, an administrator can assign access privileges to users. By expanding 'User Access' (Figure 9) in the navigation tree, the administrator is able to define the following controls:
• User Authentication: Selecting the 'Servers' sub-entry beneath 'User Access' in the navigation tree allows the administrator to authorize a user to access Base Servers or Receivers. For example, the administrator first selects a server name from the pull-down menu 904, and enters or selects the Username of a user to whom access rights are to be assigned 905. Some embodiments include a 'Lookup User' feature (not shown) that allows the administrator to view the available roles for a particular user. The administrator can then select a role for the user and add it. As a result, the user is permitted access to the selected server with the assigned role;
• Deployments: The administrator selects 'Deployments' from the navigation tree to authorize a user to initiate specific deployments 903 or access certain deployment groups 901. With the server and the deployment user from above selected, the administrator chooses a deployment group; for example, the root level group (/). This displays the contents of the deployment group. Next, the administrator chooses a deployment from the deployment list; for example, test. Clicking the 'Add' button 902 authorizes the user to run the deployment. Hierarchical organization of configurations into deployment groups simplifies management and authorization. Thus, rather than applying access rights to individual deployments, the administrator can authorize an entire deployment group. The user is then able to run any deployments associated with the group. Additionally, as shown in Figure 9, the administrator can also assign rights by deployment, rather than by server.
SERVER MANAGEMENT
The Administrative UI allows the administrator to view and edit server details in much the same way that other parameters have been configured. To see the server list, one expands 'Servers' in the navigation tree and selects 'View Servers.' Each server has a name, address and registry port. For example, a default entry is:
Table 1
Name       Address     Port
localhost  127.0.0.1   9173
An 'Edit' button next to a server allows the administrator to update the server's details. A 'New Server' button allows the administrator to add a new server by specifying server details: 'Name:' a unique name for identifying the server; for example, 'myserver;' 'Address:' a resolvable host name or IP address; and 'Port.' Clicking a 'Save' button saves the new server to the server list. Afterward, the name 'myserver' appears in the list of servers, which is available in other parts of the Administrative UI; for example, when authorizing deployments. A 'Manage Server' option under 'Servers' allows the administrator to view and upload Base Server and Receiver configuration files. Viewing a configuration file involves the following:
• Server: one selects a server, for example 'localhost.' A 'View Log' button displays a global log for the server;
• In-Use Config Files: Lists the XML-based configuration files in use by the selected server. (Clicking 'Refresh Server' causes the server to re-read its configuration files.)
• All Config Files: Allows viewing and uploading of configuration files. To view a file, one scrolls down and chooses a file from the View File pull-down menu. In an exemplary embodiment of the invention, the configuration files could include the following:
• odbase.xml or odrcvr.xml: Global settings for the server;
• odnodes.xml: Logical nodes used in deployment configurations;
• eventReportingConfig.xml: Event publishing settings for deployment reporting;
• jmsConfig.xml: Settings for the underlying reporting subsystem;
• odsnmp.xml: SNMP agent configuration details; and
• database.xml: Connection details for database deployments.
The above listing of files is only illustrative and is not intended to limit the invention. An 'Upload' button allows uploading of a configuration file to a selected server.
When an installation includes many servers, they can be managed in groups. Selecting a 'View Server Groups' option under the 'Servers' heading in the navigation tree displays a list of server groups and a 'New Server Group' button. Clicking the 'New Server Group' button launches a 'New Server Group' window.
Creating a new server group includes steps of:
• supplying a Server Group Name; for example, myservergroup;
• adding servers to the group; for example, localhost is added by first selecting the server name and then clicking the 'Add' button. In this way, the administrator adds as many servers to the group as desired;
• saving the group. In the current embodiment, one saves by clicking the 'Save' button.
Selecting a 'Manage Server Group' option under the 'Servers' heading in the navigation tree allows the administrator to upload configuration files in batch and refresh servers. For example, selecting a 'Refresh Server Group' button causes each server in the selected group to reread its configuration files. As shown in Figure 10, the status of the operation is displayed. Clicking an 'Uploading/Refreshing Status' button updates the UI with the latest progress. An 'Upload' button works similarly by sending a configuration file to the group. The appropriate local node details can be automatically substituted into the uploaded file.
REPORTING
Base Servers and Receivers can be configured to publish records of events that are stored in a central database so that the results of deployments are easily determined. In one embodiment of the invention, the reporting feature is configurable to any JDBC (JAVA Database Connectivity)-compliant database. As deployments are run, data related to the deployment are saved to a reports database. As shown in Figure 11, the Administrative UI provides several ways of generating and managing reports. Each method can be accessed by first expanding 'Reports' in the navigation tree.
• Quick Report: Offers a quick way to generate reports through predefined queries. One runs a report by selecting 'Quick Reports' 904 in the navigation tree. A report is then chosen from the pull-down menu; for example, 'Deployments in past 24 hours.' Results are shown in the UI 903. Additionally, the report can be saved as a character-delimited file by clicking a 'Download Report' button 1101. Hyperlinks 1102 in the report allow the viewer to drill down for more details, including which files were transferred, source and target information, and statistics, as shown in the detail windows 1200 of Figure 12. For example, in the 'Name' column of the report, one selects the first test deployment that was run to view deployment leg details 1201. Clicking a 'View Details' button displays specifics about each leg. Selecting a leg, e.g. 'labelmylocalhost.MYDEFINITIONNAME,' displays a manifest 1202. A 'General Statistics' button on the manifest report displays summary data;
• Custom Report: Figure 13 shows a user interface 1300 for specifying custom queries. One selects 'Custom Report' in the navigation tree and fills in the search values 1303. The query can be saved as a Quick Report by clicking 'Save Quick Report' 1303 and naming the query when prompted. Clicking 'Generate Report' runs the query. As with Quick Reports, results can be viewed in the UI or saved to a character-delimited file;
• SQL Query Report: Supports the creation of free-form queries. One starts by seeding the SQL (Structured Query Language) query with a custom report query 1301. This launches a 'SQL Query Report' window and pre-populates the SELECT statement, which can then be tailored to the specific need. A SQL query can be saved as a 'Quick Report' or run from the UI. Results may be viewed directly and saved to a character-delimited file;
• DAS (Database Auto synchronization) Custom Report: Described in greater detail below, enables the creation of custom queries for deployments that synchronize a content management application with a relational database;
• Edit Quick Report: Allows editing and deletion of Quick Report queries. Selecting 'Edit Quick Report' in the navigation tree presents a list of queries. Choosing an item from the list and clicking 'Edit Query' takes the viewer to the 'Custom Report,' 'SQL Query Report' or 'DAS Custom Report' window, depending on which was used to create the original query. 'Delete Query' removes a report from the list; and
• Report Maintenance: Lets administrators delete old records from the reporting database. Because reporting events can be stored in any JDBC-compliant database and the schema is documented, integrating third-party tools or custom report generators is readily accomplished.
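A predefined query such as 'Deployments in past 24 hours' can be sketched against the reports database. Here sqlite3 stands in for a JDBC-compliant database, and the table name and columns are hypothetical; the product documents its own schema.

```python
# Querying a reports database for recent deployments.
import sqlite3
from datetime import datetime, timedelta


def deployments_in_past_24_hours(conn):
    """Rows (name, status) for jobs started within the last 24 hours."""
    cutoff = datetime.now() - timedelta(hours=24)
    cur = conn.execute(
        "SELECT name, status FROM deployments WHERE start_time >= ?",
        (cutoff.isoformat(),))
    return cur.fetchall()
```

Because the events land in an ordinary relational schema, the same query could be issued by a third-party reporting tool rather than through the Administrative UI.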
DEPLOY-AND-RUN
A deploy-and-run (DNR) feature provides event triggers for integrating external programs or scripts into the distribution process. External tasks can be launched on source and/or target systems. Triggers may occur before or after various points in the process, upon success or failure. Such points may include:
• Deployment job;
• Connection established between sender and each target;
• Data transferred to each target; and
• Connection closed between sender and each target.
For example, the distribution of files from a content or code management system might utilize the following triggers during the distribution process:
• Before deployment job: Promote a collection of files;
• After connection: Shut down a service running on each target system;
• After data transfer: Restart the service on each target; and
• After data transfer on failure: Send an e-mail notification.
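The trigger mechanism above can be sketched as a registry of hooks keyed by trigger point and outcome. The point and outcome names are illustrative assumptions; in practice each hook would launch an external program or script rather than a Python callable.

```python
# Deploy-and-run event triggers with separate success/failure hooks.
class Triggers:
    def __init__(self):
        self.hooks = {}  # (point, outcome) -> callable

    def register(self, point: str, outcome: str, fn):
        self.hooks[(point, outcome)] = fn

    def fire(self, point: str, ok: bool = True):
        """Invoke the hook for this point, if one is registered."""
        fn = self.hooks.get((point, "success" if ok else "failure"))
        if fn:
            fn()
```

For the example in the text, a service-stop command would be registered for the after-connection point and an e-mail notification for the after-data-transfer failure point.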
DEPLOYMENT CONFIGURATIONS
The invention takes a flexible, configuration-based approach to deployment configuration because custom scripting is tedious and error-prone. Deployment configurations specify deployment rules using industry-standard XML. A rich vocabulary supports various distribution strategies. The user can create new configurations or modify the examples provided. Configurations can be written and/or edited using third-party editors, or the editing capability provided by the Admin UI.
Uploading a deployment configuration to a Base Server, as shown in Figure 14, includes the following steps:
• Expanding the 'Configuration' menu in the navigation tree and selecting 'Upload Configuration;'
• Selecting the server, for example 'localhost;'
• Specifying the deployment Group, for example the root level group (/); and
• Specifying the file. As shown, the 'Browse' button may be used to locate a deployment configuration file, for example, 'OpenDeployNG/examples/conf-od/fanout.xml;'
• Checking the box to overwrite a file having the same name on the target server; and
• Clicking the 'Upload' button to copy the file to the selected server.
Viewing the contents of a deployment configuration (Figure 15) includes steps of:
• Selecting 'View Configurations' in the navigation tree;
• Selecting a server as above;
• Selecting a deployment group as above; and
• Choosing a deployment from the list as above.
The contents of the selected deployment configuration are displayed. After selecting an XML element 1503 in the configuration, the element can be expanded or collapsed by clicking the adjacent '+' and '-' signs.
One can also edit a deployment configuration from the UI. With 'View Configurations' selected in the navigation tree:
• After selecting the server, deployment group and configuration, clicking the 'Edit' button 1501 opens the configuration for editing. A 'New' button 1502 allows the user to create an entirely new configuration.
This brings up the Deployment Configuration Composer 1600 (Figure 16), which allows the user to edit the configuration according to pre-defined XML rules. As shown in Figure 16, the Composer has two views, 'Errors' and 'Tree.' Tabs 1602 are provided for selecting the view. The Composer has a navigation tree 1603 for accessing deployment configuration elements. The right side allows the user to update, add and remove elements and attributes. For example, the node 'fileSystem' 1604 (about mid-way down in the tree) contains a 'remoteDiff' 1605 element having an 'Area' attribute 1601.
Adding a new element involves steps of:
• Selecting a node in the tree, for example 'fileSystem' 1604, and clicking the button 'New Source RemoteDiff Type' 1606 to add a second remoteDiff source. The Composer interface distinguishes incomplete elements, for example by highlighting them in red. Clicking the 'Errors' tab furnishes an explanation;
• Selecting the newly added 'Source' in the navigation tree and entering a full pathname of a directory for the 'Area' value; for example, "C:\mydir\" in a WINDOWS environment or "/mydir/" in a UNIX environment;
• Selecting the 'Path' element in the navigation tree for the newly added 'Source' and entering a name value; here the name value is '.'. The path is appended to the 'Area' value during deployments; for example "C:\mydir\." or "/mydir/."
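The Area-plus-Path resolution in the steps above can be sketched as follows; a Path name of '.' resolves to the Area directory itself. This is a minimal illustration of the joining rule, not the product's code:

```python
# Sketch of how a Path element's name is appended to the Area value.
import ntpath
import posixpath

def resolve_source(area, path_name, windows=False):
    # Join then normalize, so "/mydir/" plus "." yields "/mydir".
    mod = ntpath if windows else posixpath
    return mod.normpath(mod.join(area, path_name))
```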
In order to prevent the original configuration from being overwritten, the newly edited configuration must be renamed before changes are saved. In the present example, the deployment configuration is named 'test.' By selecting 'Deployment Configuration' in the navigation tree and entering a new 'name' value, for example 'mytest,' a new file, distinct from the 'test' file is created. After renaming, clicking the 'save' button 1607 at the top of the work area saves the file. After the file is saved, the XML is displayed in the composer window. After creating a new configuration file, the user can run the deployment as previously described.
The above description of the steps involved in using the Configuration Composer is intended to illustrate the principles of the invention, and is not meant to limit the invention. By relying on the above description, one having an ordinary level of skill in the appropriate art would be enabled to make and use the invention.
In addition to file delivery, deployment configurations support the delivery of structured XML content into relational databases. 'DataDeploy Configuration' and 'Schema Mapping' may be selected in the navigation tree for setting up database deployments, which are described in greater detail below.
PARALLEL DEPLOYMENT TO MULTIPLE TARGETS
Figure 17 shows a network topology 1700 for a parallel deployment. The invention may distribute to multiple targets in parallel, which is more efficient than deploying to each target separately. Updates can also be made transactional to ensure proper synchronization across servers, which is typically necessary for load-balanced or clustered web applications. The invention also implements the concept of logical "nodes" and "replication farms," which allows the user to dissociate the physical identification of networked systems from deployment configurations and to conveniently organize sets of nodes. So, for example, one can simply deploy to the farm 'Remote Hubs,' which consists of the logical nodes 'New York,' 'London,' and 'Tokyo,' as shown in Figure 17. Replication farms are defined or referenced within each deployment configuration. 'Target references' let the user make changes to replication farms in a consolidated file, which is faster and more accurate than having to update many individual deployment configurations.
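The farm abstraction above can be sketched as two small tables, one mapping logical nodes to physical hosts and one mapping farms to nodes, with a parallel fan-out over the farm. The hostnames and the deploy_to() stub are hypothetical placeholders:

```python
# Illustrative sketch only: logical nodes map to physical hosts, and a
# replication farm names a set of logical nodes.
from concurrent.futures import ThreadPoolExecutor

NODES = {
    "New York": "ny1.example.com",
    "London": "ldn1.example.com",
    "Tokyo": "tyo1.example.com",
}
FARMS = {"Remote Hubs": ["New York", "London", "Tokyo"]}

def deploy_to(host, payload):
    # Stand-in for the real secure transfer to one target.
    return (host, "ok")

def deploy_to_farm(farm, payload):
    # Fan out to every node in the farm in parallel.
    hosts = [NODES[node] for node in FARMS[farm]]
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        return list(pool.map(lambda h: deploy_to(h, payload), hosts))

results = deploy_to_farm("Remote Hubs", b"content")
```

Because the configuration refers only to logical names, re-pointing a node at a new physical host is a one-line change in the NODES table.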
MULTI-TIERED DEPLOYMENT CHAINING AND ROUTED DEPLOYMENTS
The invention supports multi-tiered server topologies in which deployments are chained from one tier to the next. Target systems may be geographically dispersed, with no limit to the number of tiers in the deployment chain. Typical scenarios include: • Distributing to hub nodes that in turn deploy to regional server farms;
• Hot-standby or disaster recovery sites;
• Conserving system resources when distributing to a large number of targets; and • Optimizing network bandwidth and server utilization across wide area networks.
Chaining means specifying within the deployment configuration which deployment to invoke next on a target system. Figure 18 shows a view 1800 of the Administrative UI that illustrates chaining. For example, to automatically replicate content to European sites after deploying from 'San Francisco' to 'London,' the user simply specifies in the San Francisco deployment configuration which deployment to initiate next on the London system. As shown in Figure 18, the Administrative UI provides hyperlinks that allow point-and-click navigation to downstream deployment status. It is also possible to request the termination of an active deployment via the Administrative UI or through the command line interface.
Routed deployments build on multi-tiered chaining to provide a highly scalable distribution approach, as shown in Figure 19. An exemplary routed deployment, from an American region 1901 to a European region 1902 involves separate legs from San Francisco to New York, New York to London, and then London to both Paris and Munich. However, rather than explicitly configuring each hop of the deployment, a route is automatically computed from a pre-defined set of route segments, simply by specifying the source and destination. In this way, the invention provides a means for configuring deployments from source to target nodes without having to worry about the route taken to reach each destination. The reporting database records a unique ID for each routed deployment, which yields a comprehensive audit trail by associating an initiating job with all downstream deployments.
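The route computation described above can be sketched as a graph search over the pre-defined segments. The segment table mirrors the legs of Figure 19; the breadth-first strategy is an assumption for illustration, not necessarily the product's algorithm:

```python
# Sketch: compute a deployment route from pre-defined route segments,
# given only the source and destination nodes.
from collections import deque

# Directed legs, as in the Figure 19 example topology (illustrative).
SEGMENTS = {
    "San Francisco": ["New York"],
    "New York": ["London"],
    "London": ["Paris", "Munich"],
}

def compute_route(source, destination):
    # Breadth-first search; the user never configures intermediate hops.
    queue = deque([[source]])
    seen = {source}
    while queue:
        route = queue.popleft()
        if route[-1] == destination:
            return route
        for nxt in SEGMENTS.get(route[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None  # no chain of segments reaches the destination
```

A single routed-deployment ID attached to every hop of the returned route is what makes the downstream audit trail possible.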
REVERSE DEPLOYMENT
Distribution typically entails pushing updates to one or more servers. Conversely, a reverse deployment, briefly mentioned above, pulls files from a server. Examples where reverse deployments are used include:
• Retrieval of production log files for archiving;
• Copying a production server as the basis for a new web site; and • Transferring assets from outside a firewall when security policies dictate that connections must be initiated from inside the firewall.
MANIPULATION FEATURES
The invention also provides several means by which files can be manipulated during the distribution process. These include:
• Filters: A deployment configuration may specify rules for including and excluding files and directories. The invention supports location-based filters as well as pattern-based filters using regular expressions; • Transfer rules: A set of rules covers how files should be handled during data transfer. These include deleting target files that are no longer present at the deployment source, preserving access controls, whether to follow symbolic links, timeout values, and retry counts. It is also possible to specify data compression levels when transmitting over bandwidth-constrained networks;
• Permission and ownership rules: The invention provides the capability of specifying rules for manipulating permissions and ownerships of deployed files and directories;
• Internationalization: The invention honors data encoding specific to the system locale. For example, Japanese files that contain multi-byte characters can be deployed. In addition, XML configuration files may use UTF-8 or local encoding.
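The filter rules above can be sketched as follows, combining a location-based include list with regular-expression excludes. This is an illustrative sketch, not the product's filter engine:

```python
# Sketch of location-based and pattern-based deployment filters.
import re

def should_deploy(path, include_dirs=(), exclude_patterns=()):
    # Location-based filter: path must sit under an included directory
    # (when an include list is given at all).
    if include_dirs and not any(path.startswith(d) for d in include_dirs):
        return False
    # Pattern-based filter: any matching regular expression excludes the file.
    return not any(re.search(p, path) for p in exclude_patterns)
```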
CONTENT INTEGRITY AND SECURITY
Businesses cannot maintain customer-facing web presences with stale data and incorrect application code. Nor can an enterprise operate effectively with information that is not always current and available. The invention provides a number of means to help manage the flow of code, content and configurations while maintaining data integrity, synchronization and security throughout the entire distribution process.
TRANSACTIONAL DISTRIBUTION
The invention ensures data integrity and application reliability by tracking transfer details and rolling back in the case of deployment failure. When an interruption occurs, the invention rolls back the deployment transaction and restores each target to its previous, error-free state. Any deployment can be transactional:
• Parallel deployment: As previously described, the invention provides the capability of making parallel deployments, so that the user can update multiple targets simultaneously, which is more efficient than deploying to each target separately. As shown in Figure 20, a transactional parallel deployment 2000 ensures that all destinations are kept completely synchronized. This is typically necessary for load-balanced or clustered web applications. A parallel deployment is made transactional by simply setting an attribute in the deployment configuration. Doing so ensures that each parallel deployment leg runs in lockstep: setup, transfer, and commit. If one leg fails, then all targets are rolled back to their original state;
• Quorum: Parallel deployments sometimes require only a subset of targets, known as a quorum, to receive updates for a transaction to be considered successful. The invention allows the user to specify the number of targets to which updates must be successfully delivered before the deployment transaction is committed. The quorum value can range from one to the total number of fan-out targets. If the quorum is met, successful targets are committed and failed ones are rolled back. Thus, each target is always left in a known state: updated or original.
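The quorum commit rule just described can be sketched as a small settlement function; this is a minimal illustration of the decision logic, not the product's implementation:

```python
def settle_quorum(results, quorum):
    # results maps each target to True (update delivered) or False.
    # If at least `quorum` targets succeeded, commit the successful ones
    # and roll back the failed ones; otherwise roll back every target, so
    # each is always left in a known state, updated or original.
    succeeded = sum(1 for ok in results.values() if ok)
    if succeeded >= quorum:
        return {t: ("commit" if ok else "rollback")
                for t, ok in results.items()}
    return {t: "rollback" for t in results}

outcome = settle_quorum({"ny": True, "ldn": True, "tyo": False}, quorum=2)
```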
• Multi-tiered and routed deployments: As described above, the invention provides approaches for delivering updates to many servers efficiently by deploying to one tier of targets, which in turn deploys to a second tier, and so on. Transactional deployments ensure the integrity of updates across all servers, regardless of location within the network topology. If delivery to any server fails, all servers roll back to their original state. Additionally, the quorum feature may be employed to enforce unique success criteria at each tier.
LOGGING
A logging facility generates comprehensive logs for archiving and troubleshooting. Log files on sender and receiver systems provide audit trails that can be used to satisfy compliance requirements by proving exactly when and where updates were distributed. A user-configurable threshold limits the maximum size any log file is permitted to attain before it is archived and a new log is started. Log files can be accessed from the file system or viewed directly in the Administrative UI. Whenever a deployment is run, log files are created for the deployment job. The user can view log files for a particular deployment by selecting 'View Deployment' in the navigation tree. The 'View' pull-down menu provides options for viewing both sender and receiver logs. Clicking a 'View Log' button next to an item in the Deployments list (upper half of the Administrative UI) opens a 'macro' log. The Log Viewer 2100 (Figure 21) shows entries pertaining to all jobs run with the particular deployment name. Clicking 'X' at the top right corner of the window closes the Log Viewer. Additionally, a 'View Log' button also appears next to each item in the Details list. Each corresponds to a 'micro' deployment log, which contains details for a pairing of source-target servers. At the bottom of the Log Viewer, buttons 2101 are provided for navigating through a log file and for refreshing the display.
SECURE DISTRIBUTION
The invention incorporates a number of features that enable secure distribution of code, content and configurations inside and outside firewalls as shown in the topology diagram 2200 of Figure 22. If desired, data can be encrypted during transfer to protect enterprise-sensitive information. Both strong (up to 168-bit SSL (Secure Sockets Layer)) and weak (40-bit symmetric key file) encryption mechanisms are supported.
Port authentication ensures that deployment targets communicate with only known senders, either directly or through firewalls. SSL authentication may be enabled for added security. Furthermore, the invention allows the user to restrict the directories to which trusted senders may deploy updates and to limit or prohibit the invocation of Deploy-and-Run tasks, described above, on receiving systems.
The invention offers the flexibility to configure the deployment listener port and administration ports. For example, Base Servers and Receivers can run with the level of authority deemed appropriate by the administrator. Thus, the invention can run as a root or non-root user in UNIX environments, and as an administrator or non-administrator in WINDOWS environments. When running multiple instances of the invention on the same host, each instance is separately configurable. For example, a hosting center may set up a unique Receiver instance for each client that will deploy updates. Each Receiver may have its own encryption setup and may be authorized to update specific file system directories. Additional security measures include the ability to lock down command line invocations to specific hosts, as well as confining user authentication for the Administrative UI and web services to a specific access service.
DATABASE AUTO-SYNCHRONIZATION (DAS)
The invention provides the capability of event-driven, synchronized deployments of content from various repositories. The present feature finds particular application in enterprises using content management software, for example TEAMSITE, supplied by INTERWOVEN, INC., Sunnyvale, CA, to manage web content. DAS automates deployment of forms-based structured content (known as data content records, or DCRs) into a database for rendering within the development environment. DAS also enables the indexing of extended metadata into a database, which can then be used as the basis for metadata-based publication and expiration of content, described in greater detail below.
To activate DAS, the Base Server is preferably configured for database deployments. A content management system is preferably also present on the Base Server host. The Administrative UI can then be used to configure DAS and set up the content management system event server by expanding 'DAS' in the navigation tree. In addition, DAS deployment reports are accessible by expanding 'Reports' in the navigation tree and selecting 'DAS Custom Report.'
DEPLOYMENT OF DATABASE CONTENT
As described above, the invention provides a unified distribution architecture that seamlessly combines secure, reliable file distribution with delivery of structured content 2401 to databases that drive business applications, personalization servers, enterprise portals and search engines. In a further embodiment of the invention, a data deployment module 2402 enables a Base Server to securely deliver relational database content via a standard Receiver 2403. Integrated transactional delivery of file and database updates advantageously facilitates synchronized deployment to load-balanced or clustered applications. File and database distribution is managed within a single user interface and reporting subsystem, minimizing the time needed to set up and record deployment activity.
The data deployment module is an optional feature that is first activated, for example by running a license enablement utility. Following this, Base Server and Receiver are configured for database deployments. The Administrative UI can then be used to configure database deployments by expanding 'Configurations' in the navigation tree as shown in Figure 23:
• 'DataDeploy Configuration' allows the user to specify rules for the deployment, for example: type and location of source files, which schema mapping to use, and the target database specification;
• 'Schema Mapping' allows the user to map the DTD (document type definition) for the source data to the target database schema;
• 'Wrapper configurations' allows storage of configurations with their associated source data. A wrapper configuration is created by selecting 'View Configurations' in the navigation tree, choosing a server and deployment group, clicking a 'DataDeploy Wrapper' check box, and clicking the 'New' button to bring up a Configuration Composer. When configured for combined database and file deployment, the invention synchronizes the delivery of XML-based structured content 2401 to the target database with delivery of code and unstructured content files to multiple servers as shown in Figure 24.
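The schema-mapping step above can be sketched as follows; the DCR element names and database column names are hypothetical, since the real mapping is driven by the source DTD and the target database schema:

```python
# Sketch: map a structured-content record (DCR) in XML to a database row.
import xml.etree.ElementTree as ET

# Hypothetical mapping from DCR element names to target columns.
MAPPING = {"title": "TITLE", "author": "AUTHOR_NAME"}

def dcr_to_row(dcr_xml):
    # Extract each mapped element's text and key it by its column name.
    root = ET.fromstring(dcr_xml)
    return {column: root.findtext(element)
            for element, column in MAPPING.items()}

row = dcr_to_row("<dcr><title>Q3 Report</title><author>Lee</author></dcr>")
```

The resulting row dictionary is what a database writer would bind into an INSERT or UPDATE statement against the target schema.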
INTELLIGENT DELIVERY MODULE
In a further embodiment of the invention, an intelligent delivery module enables a Base Server to use content attributes for smart distribution and syndication: • Metadata based deployment: Deployment criteria are specified using a metadata query, for example
Table 2
Action                         Example
Deploy based on metadata       Deploy files with archive=true and importance=high
Publish based on date          Deploy reports with publication date after 6/6/04
Delete based on expiration     Expire web pages with expiration date after 7/7/04
Deliver to specific audiences  Deploy reports tagged with group=FundSubscribers
Metadata-based deployment relies on a payload adapter, described above, that supports use of a JDBC-compliant database as the metadata repository. When combined with DAS or the data deployment module, described above, content attributes can provide the basis for metadata-based publication and expiration of content. Users may also write their own payload adapters to integrate with other metadata repositories.
• Syndication: Content reuse through syndicated delivery is supported via an offer/subscription management layer, as shown in Figure 25. An offer 2501 defines the content source and criteria, including the metadata query for identifying relevant assets. A subscription 2502 completes the deployment rules for an offer, including target nodes, schedule, and delivery mechanism, such as FTP or e-mail. Syndication takes advantage of the built-in scheduler, metadata-based deployment, and delivery adapters, all described supra. The Intelligent Delivery module is optional and is activated in the same way as the data deployment module. Offers and subscriptions can then be configured using the Administrative UI by expanding 'Syndication' in the navigation tree.
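The metadata queries of Table 2 can be sketched as attribute predicates evaluated against each asset's metadata; the attribute names mirror the table's examples and are otherwise illustrative:

```python
# Sketch: evaluate a metadata query against an asset's attributes.
from datetime import date

def matches(asset, query):
    # query maps an attribute name to a required value, or to a predicate
    # for range tests such as date comparisons.
    return all(
        rule(asset.get(attr)) if callable(rule) else asset.get(attr) == rule
        for attr, rule in query.items()
    )

asset = {"archive": "true", "importance": "high",
         "expiration": date(2004, 7, 1)}
deploy_query = {"archive": "true", "importance": "high"}
expire_query = {"expiration": lambda d: d is not None and d < date(2004, 7, 7)}
```

In the real system such a query would run against the JDBC-backed metadata repository rather than in-memory dictionaries.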
Creating an offer is similar to creation of a deployment configuration. An offer is a partial deployment configuration that contains details about the source content location and criteria, including a metadata query for determining which content belongs to the offer. For example, an offer might include all financial reports with a metadata tag 'Type' having a value 'Stock.' A subscription defines a completed set of deployment rules for an offer, including the target recipients, schedule and delivery mechanism. For example, one subscription might FTP assets defined by a particular offer to a set of partners on a weekly basis, while another subscription e-mails the same assets once per month to a group of customers. In addition to using the Administrative UI or command line tool, the web services interface can also be used to expose offers and subscriptions through a third-party application, such as a self-service portal for business partners.
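The offer/subscription relationship above can be sketched as two small records, where one offer is shared by several subscriptions; the field names are illustrative assumptions:

```python
# Sketch: an offer is a partial deployment configuration; a subscription
# completes it with targets, schedule and delivery mechanism.
from dataclasses import dataclass

@dataclass
class Offer:
    source: str   # source content location
    query: dict   # metadata criteria identifying the offer's assets

@dataclass
class Subscription:
    offer: Offer
    targets: list
    schedule: str
    mechanism: str  # e.g. "ftp" or "email"

stock = Offer("reports/", {"Type": "Stock"})
weekly = Subscription(stock, ["partner-a"], "weekly", "ftp")
monthly = Subscription(stock, ["customers"], "monthly", "email")
```

Because both subscriptions reference the same offer object, a change to the offer's criteria automatically applies to every subscriber.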
WEB CHANGE MANAGEMENT HUB
The invention streamlines IT operations by providing for secure, automated provisioning of web application updates. A web change management hub adds further control dimensions to the change management process. These include, for example:
• The ability to aggregate and coordinate change sets from multiple code and content repositories. This allows IT operations to control how changes are administered without forcing application developers to alter their tools or release processes;
• Immediate rollback of changes using built-in version control. The web change management hub maintains snapshots of code, content and configuration changes so that deployed web applications can be reverted to any previously known good state; and • Streamlined change process and approvals with workflow automation.
The management hub is installed separately on a host with a Base Server. Branches and work areas 2601 provide the organizational structure for managing incoming code, content and configurations. Application files are aggregated into a work area 2601 either by pushing from the respective source repositories 2602 or pulling from within the management hub. The content deployment system can be used to facilitate the transfer of files into the management hub. Alternatively, the files can be copied into a work area through a file system interface to the management hub, which makes the hub store appear as a drive on WINDOWS systems or a file system mount on UNIX. Automated workflows ensure approvals 2603 and notifications occur at the appropriate points in the change management process. When code, content and configuration files are staged and ready to be provisioned, the new application version is saved as an 'edition' 2604 and the content deployment system 2605 deploys the incremental changes to the target servers 2606. Editions provide an efficient mechanism for recording the state of target servers at any point in time. As a result, the content deployment system can instantly roll back an application to a previous state by simply deploying the files that differ between the previous and current editions. Furthermore, editions help satisfy audit requirements by preserving accurate snapshots of web applications as they existed at specific points in time.
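The edition mechanism can be sketched by representing an edition as a map from file path to content hash; deploying the difference between two editions then covers both forward update and rollback. This is an illustrative sketch of the diffing idea only:

```python
def incremental_changes(previous, current):
    # Editions as {path: content_hash}. Deploy only paths that are new or
    # whose content changed; delete paths no longer present in the new
    # edition. Rollback is the same computation with the editions swapped.
    deploy = [p for p, h in current.items() if previous.get(p) != h]
    delete = [p for p in previous if p not in current]
    return sorted(deploy), sorted(delete)

edition_1 = {"app.war": "aaa", "web.xml": "bbb"}
edition_2 = {"app.war": "ccc", "index.html": "ddd"}
changes = incremental_changes(edition_1, edition_2)
```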
Web change management with the management hub and the content deployment system enables IT operations to realize substantial efficiency gains. As a result, change request backlogs that typically plague the web application infrastructure are eliminated and IT departments can be much more responsive to their users. Application developers and business managers benefit from the removal of a critical IT bottleneck, which translates into application changes being deployed to test and production servers quickly and accurately. And perhaps most importantly, enterprises can adhere to IT governance requirements by consolidating and enforcing web change processes while also maintaining historical records and representations of all deployed web applications.
The invention can be utilized throughout a complex web application environment, regardless of where code and content is managed, or where it is destined to go. The content deployment system can directly integrate with a wide range of source code management, or content management systems. In addition, the invention can deliver code or content to any network destination, including at least application servers, web servers, file servers, databases, caches, and CDNs (content delivery network). The result is a distribution solution that can be utilized enterprise-wide.
One skilled in the art will appreciate that, in a preferred embodiment, the various modules and functional units described herein are software modules comprising computer-readable code for carrying out the various processes that constitute the invention's unique functionality. In another embodiment, the various modules could be programmable hardware modules embodying computer-readable instructions for carrying out the various processes that constitute the invention's unique functionality. While the software modules of the preferred embodiment are created using a variety of common languages and protocols, such as JAVA, XML, SOAP, WSDL and SNMP, the invention is not limited to those languages and protocols. The principles of the invention as described herein can be implemented using other languages and protocols. Such are entirely consistent with the spirit and scope of the invention.
Although the invention has been described herein with reference to certain preferred embodiments, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. Accordingly, the invention should only be limited by the Claims included below.

Claims

1. A content deployment system comprising: a plurality of nodes connected in a network topology wherein each node communicates with at least one other node via a secure connection; said plurality of nodes including at least one sender configured both to receive content and deploy content to other nodes across network domains and platforms, and at least one receiver configured to receive deployed content; an administration module; a module for defining and implementing routed and/or multi-tiered deployments; and means for making transactional deployments and rolling them back in case of failure.
2. The system of Claim 1, wherein said at least one sender comprises a base server.
3. The system of Claim 2, wherein said at least one sender is configured to receive content for deployment from at least one content repository.
4. The system of Claim 2, wherein said at least one sender is configured to receive content deployed from another sender.
5. The system of Claim 1, wherein deployments are pushed from one node to another.
6. The system of Claim 1, wherein deployments are pulled from one node to another.
7. The system of Claim 1, wherein said deployed content comprises any of: data files; database updates; markup code; media files; and application code.
8. The system of Claim 1, wherein said connection is secured by means of an authentication process that ensures that communication is with a known machine in a known manner and that data is received from the known machine without interception by a third party.
9. The system of Claim 8, wherein a receiver is configured to listen on a specific port for connection from a firewall's specific IP (Internet Protocol) address.
10. The system of Claim 9, wherein said receiver is configured to receive content only from a known, trusted source.
11. The system of Claim 8, wherein said connection is secured by a certificate-based authentication process.
12. The system of Claim 1, wherein content deployed over said secure connection is encrypted.
13. The system of Claim 1, wherein said administration module includes a user interface for accessing a plurality of administrative services included in said module, said user interface including any of event-driven and command-line capabilities.
14. The system of Claim 13, wherein said plurality of administrative services includes at least one service for managing user rights and authentication.
15. The system of Claim 14, wherein said system provides single sign-on capability, so that a user authenticates on said system using authentication credentials for an underlying operating system or user directory.
16. The system of Claim 13, wherein said plurality of administrative services includes: at least one service for managing deployments; at least one service for creating and viewing schedule entries for automated deployments; at least one service for any of viewing, editing and uploading deployment configurations; at least one service for viewing and managing base servers and receivers; at least one service for creating and running deployment report queries; at least one service for assigning access rights to base servers and receivers and restricting users' rights to initiate deployments; at least one service for configuring database auto-synchronization; and at least one service for managing syndicated content offers and subscriptions.
17. The system of Claim 1, further comprising a module for providing event triggers for integrating external programs into a deployment, wherein external tasks are launched on source and/or target systems.
18. The system of Claim 1, further comprising a module that invokes a payload adapter module at the start of a deployment for integrating a sender with an arbitrary source, and wherein said payload adapter prepares a payload of files to be returned to said sender from the source.
19. The system of Claim 1, further comprising a module that invokes a delivery adapter after files are deployed to a target, wherein said delivery adapter is invoked with a manifest of deployed files, and wherein said adapter processes said deployed files.
20. The system of Claim 1, further comprising an interface for programmatic access to system services.
21. The system of Claim 1, wherein said system is scalable to include multi-tiered development topologies including servers inside and outside of firewalls.
22. The system of Claim 1, wherein deployments distribute only incremental changes between a source and each target.
23. The system of Claim 1, wherein a deployment is defined by means of a deployment configuration, said deployment configuration comprising a machine-readable file that describes a deployment strategy.
24. The system of Claim 23, wherein a sender is configured to deploy to multiple targets in parallel.
25. The system of Claim 24, wherein said deployment configuration defines logical nodes and replication farms that are independent of a physical topology.
26. The system of Claim 23, wherein said deployment configuration defines multi-tiered deployments by chaining deployments from one tier to the next.
27. The system of Claim 26, wherein said deployment configuration defines routed deployments by computing a route from a pre-defined set of route segments.
28. The system of Claim 23, wherein said deployment configuration specifies rules for including and excluding files and directories.
29. The system of Claim 23, wherein said deployment configuration specifies transfer rules to describe how data is to be handled during transfer.
30. The system of Claim 23, wherein said deployment configuration specifies permission and ownership rules for deployed files and directories.
31. The system of Claim 1, wherein said means for making transactional deployments and rolling them back comprises a deployment configuration, wherein said deployment configuration specifies that a deployment is transactional, and system services that roll back a deployment transaction and restore each target to its last known good state in the event of failure.
32. The system of Claim 31, wherein each leg of a parallel deployment is synchronized with all other legs.
33. The system of Claim 31 , wherein said deployment configuration specifies a quorum for a parallel deployment, wherein said quorum comprises a defined sub-set of a total number of targets.
34. The system of Claim 33, wherein if a deployment to the quorum succeeds, successful targets are committed and failed targets are rolled back.
35. The system of Claim 1, further comprising a logging facility, said logging facility providing means for creating log files on sender and receiver systems.
36. The system of Claim 35, said logging facility further including a log viewer for viewing said log files.
37. The system of Claim 1, further comprising means for any of: locking down command line invocations to specific hosts; and confining user authentication for the administration module to one or more specific access services.
38. The system of Claim 2, further comprising a module for synchronizing automated deployments of content from a plurality of repositories.
39. The system of Claim 2, further comprising a data deployment module for deploying data updates to relational databases.
40. The system of Claim 39, wherein said data deployment module comprises means for any of: specifying type and location of source files; which schema mapping to use; and a target database specification.
41. The system of Claim 39, wherein said data deployment module comprises means for any of: mapping document type definition to a target database schema; and creating a wrapper configuration.
42. The system of Claim 39, wherein said data deployment module synchronizes delivery of structured content to a target database with delivery of code and unstructured content files to multiple servers.
43. The system of Claim 2, further comprising an intelligent delivery module, wherein said intelligent delivery module enables a base server to use content attributes for smart distribution and syndication.
44. The system of Claim 43, wherein said intelligent delivery module comprises means for any of: specifying deployment criteria in the form of a metadata query; and a payload adapter that supports use of a JDBC-compliant database as a metadata repository.
45. The system of Claim 43, wherein said intelligent delivery module comprises an offer/subscription management layer, said offer/subscription management layer including: means for creating an offer, wherein an offer defines a content source and a metadata query for identifying relevant assets; and means for creating a subscription, wherein a subscription completes deployment rules for an offer, including any of target nodes, schedule, and delivery mechanism.
46. The system of Claim 2, further comprising a web change management hub for automating secure provisioning of web updates.
47. The system of Claim 46, wherein said web change management hub comprises any of: means for aggregating and coordinating change sets from multiple code and content repositories; means for aggregating code, content and configuration files that are ready to be provisioned into editions, wherein an edition is deployed to a target.
48. The system of Claim 47, wherein an edition deploys incremental changes to a target server, and wherein said edition provides an efficient mechanism for recording current state of said target server at any time.
49. The system of Claim 48, wherein said content deployment system rolls back said target server to a previous state based on said recorded current state by deploying files that differ between previous and current editions.
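Claims 48 and 49 describe rolling a target server back to a previous edition by deploying only the files that differ between the previous and current editions. A minimal sketch of that diffing step, modeling an edition as a mapping from file path to content hash (the representation and names are assumptions, not the patented implementation):

```python
# Hypothetical sketch of edition-based rollback (claims 48-49). An
# edition is modeled as a dict of path -> content hash. Rolling back
# redeploys only files that changed or were removed since the previous
# edition, and deletes files that exist only in the current one.

def rollback_plan(current_edition, previous_edition):
    """Return (deploy, delete): paths to redeploy from the previous
    edition, and paths to delete because they were added afterwards."""
    deploy = sorted(
        path for path, digest in previous_edition.items()
        if current_edition.get(path) != digest   # changed or missing now
    )
    delete = sorted(
        path for path in current_edition
        if path not in previous_edition          # added since; must go
    )
    return deploy, delete
```

Because only the differing files move, the rollback is as incremental as a forward deployment.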
50. The system of Claim 47, wherein an edition preserves accurate snapshots of a web application as it existed at specific points in time, so that audit requirements are satisfied.
51. A method for transactionally deploying content comprising steps of: providing a plurality of nodes in a network topology wherein each node communicates with at least one other node via a secure connection; said plurality of nodes including at least one sender configured both to receive content and deploy content to other nodes across network domains and platforms, and at least one receiver configured to receive deployed content; providing centralized administration of a system including said plurality of nodes by means of an administration module; defining and implementing routed and/or multi-tiered deployments; and making transactional deployments and rolling them back in case of failure.
52. The method of Claim 51, wherein said at least one sender comprises a base server.
53. The method of Claim 52, further comprising a step of configuring said at least one sender to receive content for deployment from at least one content repository.
54. The method of Claim 52, further comprising a step of configuring said at least one sender to receive content deployed from another sender.
55. The method of Claim 51 , further comprising a step of pushing deployments from one node to another.
56. The method of Claim 51 , further comprising a step of pulling deployments from one node to another.
57. The method of Claim 51 , wherein said deployed content comprises any of: data files; database updates; markup code; media files; and application code.
58. The method of Claim 51 , further comprising a step of securing said connection by means of an authentication process that ensures that communication is with a known machine in a known manner and that data is received from the known machine without interception by a third party.
59. The method of Claim 58, further comprising a step of configuring a receiver to listen on a specific port for connection from a firewall's specific IP (Internet Protocol) address.
60. The method of Claim 59, further comprising a step of configuring a receiver to receive content only from a known, trusted source.
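Claims 59 and 60 have the receiver listen on a specific port and accept content only from a known, trusted source such as the firewall's IP address. A minimal sketch of that receiver-side check, assuming illustrative addresses and a placeholder port that are not taken from the patent:

```python
# Hypothetical sketch of the receiver-side checks in claims 59-60:
# listen on a fixed port and accept a connection only when the peer's
# address is in a configured allow-list (e.g. the firewall's IP).
import socket

TRUSTED_SOURCES = {"203.0.113.10"}   # example address, assumption only
LISTEN_PORT = 1776                   # illustrative port, not from the patent

def is_trusted(peer_address):
    """Accept a connection only if the peer's IP is in the allow-list."""
    return peer_address in TRUSTED_SOURCES

def serve_once():
    """Accept one connection; drop it unless the source is trusted."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", LISTEN_PORT))
        srv.listen(1)
        conn, (peer_ip, _port) = srv.accept()
        with conn:
            if not is_trusted(peer_ip):
                return None              # untrusted source: refuse content
            return conn.recv(4096)       # payload from the trusted sender
```

In the patent's design this address check complements, rather than replaces, the certificate-based authentication of claim 61.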
61. The method of Claim 58, further comprising a step of securing said connection by a certificate-based authentication process.
62. The method of Claim 51 , further comprising a step of encrypting content deployed over said secure connection.
63. The method of Claim 51, further comprising a step of accessing a plurality of services included in said administration module by means of a user interface, said user interface including any of event-driven and command-line capabilities.
64. The method of Claim 63, wherein said plurality of services includes at least one service for managing user rights and authentication.
65. The method of Claim 64, wherein said system provides single sign-on capability, so that a user authenticates on said system using authentication credentials for an underlying operating system or user directory.
66. The method of Claim 63, wherein said plurality of services includes: at least one service for managing deployments; at least one service for creating and viewing schedule entries for automated deployments; at least one service for any of viewing, editing and uploading deployment configurations; at least one service for viewing and managing base servers and receivers; at least one service for creating and running deployment report queries; at least one service for assigning access rights to base servers and receivers and restricting users' rights to initiate deployments; at least one service for configuring database auto-synchronization; and at least one service for managing syndicated content offers and subscriptions.
67. The method of Claim 51 , further comprising a step of providing event triggers for integrating external programs into a deployment, wherein external tasks are launched on source and/or target systems.
68. The method of Claim 51, further comprising a step of invoking a payload adapter module at the start of a deployment for integrating a sender with an arbitrary source, wherein said payload adapter prepares a payload of files to be returned to said sender from the source.
69. The method of Claim 51, further comprising a step of invoking a delivery adapter after files are deployed to a target, wherein said delivery adapter is invoked with a manifest of deployed files, and wherein said adapter processes said deployed files.
70. The method of Claim 51, further comprising a step of providing an interface for programmatic access to system services.
71. The method of Claim 51 , wherein said system is scalable to include multi-tiered development topologies including servers inside and outside of firewalls.
72. The method of Claim 51 , further comprising a step of distributing only incremental changes between a source and each target in a deployment.
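Claim 72 distributes only incremental changes between a source and each target. One common way to realize this is to compare content digests; the sketch below assumes that approach and invented names, and is not the patented implementation:

```python
# Hypothetical sketch of claim 72: compare content digests recorded for
# the target against the sender's current files, and transfer only the
# files that are new or whose content has changed.
import hashlib

def digest(data):
    """Content fingerprint used to detect changes."""
    return hashlib.sha256(data).hexdigest()

def incremental_manifest(source_files, target_digests):
    """source_files: path -> bytes on the sender.
    target_digests: path -> digest last recorded for the target.
    Returns the sorted list of paths that must actually be transferred."""
    return sorted(
        path for path, data in source_files.items()
        if target_digests.get(path) != digest(data)
    )
```

Unchanged files drop out of the manifest, so a routine deployment moves only the delta.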
73. The method of Claim 51 , further comprising a step of defining a deployment by means of a deployment configuration, said deployment configuration comprising a script that describes a deployment strategy.
74. The method of Claim 73, wherein a sender is configured to deploy to multiple targets in parallel.
75. The method of Claim 74, wherein said deployment configuration defines logical nodes and replication farms that are independent of a physical topology.
76. The method of Claim 73, wherein said deployment configuration defines multi-tiered deployments by chaining deployments from one tier to the next.
77. The method of Claim 76, wherein said deployment configuration defines routed deployments by computing a route from a pre-defined set of route segments.
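Claim 77 computes a routed deployment's path from a pre-defined set of route segments. A minimal sketch of such a computation, using breadth-first search over the segment set (the graph representation is an assumption; the patent does not specify the algorithm):

```python
# Hypothetical sketch of claim 77: derive a deployment route from a
# pre-defined set of route segments (directed hops between nodes),
# here by breadth-first search from source to target.
from collections import deque

def compute_route(segments, source, target):
    """segments: iterable of (from_node, to_node) pairs.
    Returns the node list of a shortest route, or None if unreachable."""
    hops = {}
    for a, b in segments:
        hops.setdefault(a, []).append(b)
    queue = deque([[source]])
    seen = {source}
    while queue:
        route = queue.popleft()
        if route[-1] == target:
            return route
        for nxt in hops.get(route[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None
```

Chaining the resulting hops from one tier to the next yields the multi-tiered deployments of claim 76.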
78. The method of Claim 73, wherein said deployment configuration specifies rules for including and excluding files and directories.
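Claim 78 has the deployment configuration specify rules for including and excluding files and directories. A minimal sketch of applying such rules with glob-style patterns, where the exclusion-wins convention is an assumption rather than something the patent states:

```python
# Hypothetical sketch of claim 78: filter a candidate file list through
# include/exclude patterns from a deployment configuration. A path is
# deployed if it matches some include pattern and no exclude pattern;
# exclusions take precedence (a common convention, assumed here).
from fnmatch import fnmatch

def select_files(paths, include=("*",), exclude=()):
    """Return the sorted subset of `paths` selected for deployment."""
    return sorted(
        p for p in paths
        if any(fnmatch(p, pat) for pat in include)
        and not any(fnmatch(p, pat) for pat in exclude)
    )
```

For instance, excluding `*.bak` removes editor backups from a deployment without enumerating them.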
79. The method of Claim 73, wherein said deployment configuration specifies transfer rules to describe how data is to be handled during transfer.
80. The method of Claim 73, wherein said deployment configuration specifies permission and ownership rules for deployed files and directories.
81. The method of Claim 51 , wherein said step of tracking and rolling back transactional deployments comprises steps of: defining a deployment by means of a deployment configuration, wherein said deployment configuration specifies that a deployment is transactional; and rolling back a deployment transaction by means of system services to restore each target to its last known good state in the event of failure.
82. The method of Claim 81 , wherein each leg of a parallel deployment is synchronized with all other legs.
83. The method of Claim 81 , wherein said deployment configuration specifies a quorum for a parallel deployment, wherein said quorum comprises a defined sub-set of a total number of targets.
84. The method of Claim 83, further comprising a step of, if a deployment to the quorum succeeds, committing successful targets and rolling back failed targets.
85. The method of Claim 51 , further comprising a step of creating log files on sender and receiver systems by means of a logging facility.
86. The method of Claim 85, further comprising a step of viewing said log files by means of a log viewer, said log viewer included in said logging facility.
87. The method of Claim 51 , further comprising any of the steps of: locking down command line invocations to specific hosts; and confining user authentication for the administration module to one or more specific access services.
88. The method of Claim 52, further comprising a step of synchronizing automated deployments of content from a plurality of repositories.
89. The method of Claim 52, further comprising a step of deploying data updates to relational databases by means of a data deployment module.
90. The method of Claim 89, wherein said step of deploying data updates to relational databases comprises any of the steps of: specifying type and location of source files; specifying which schema mapping to use; and specifying a target database.
91. The method of Claim 89, wherein said step of deploying data updates to relational databases comprises any of the steps of: mapping document type definition to a target database schema; and creating a wrapper configuration.
92. The method of Claim 89, wherein said step of deploying data updates to relational databases comprises a step of synchronizing delivery of structured content to a target database with delivery of code and unstructured content files to multiple servers.
93. The method of Claim 52, further comprising a step of distributing and syndicating content based on content attributes by means of an intelligent delivery module, wherein said intelligent delivery module enables a base server to use content attributes for smart distribution and syndication.
94. The method of Claim 93, wherein said step of distributing and syndicating content based on content attributes by means of an intelligent delivery module comprises any of the steps of: specifying deployment criteria in the form of a metadata query; and providing a payload adapter that supports use of a JDBC-compliant database as a metadata repository.
95. The method of Claim 93, wherein said step of distributing and syndicating content based on content attributes by means of an intelligent delivery module comprises steps of, by means of an offer/subscription management layer: creating an offer, wherein an offer defines a content source and a metadata query for identifying relevant assets; and creating a subscription, wherein a subscription completes deployment rules for an offer, including any of target nodes, schedule, and delivery mechanism.
96. The method of Claim 52, further comprising a step of automating secure provisioning of web updates by means of a web change management hub.
97. The method of Claim 96, wherein said step of automating secure provisioning of web updates by means of a web change management hub comprises any of the steps of: aggregating and coordinating change sets from multiple code and content repositories; aggregating code, content and configuration files that are ready to be provisioned into editions, wherein an edition is deployed to a target.
98. The method of Claim 97, wherein an edition deploys incremental changes to a target server, and wherein said edition provides an efficient mechanism for recording current state of said target server at any time.
99. The method of Claim 98, wherein said content deployment system rolls back said target server to a previous state based on said recorded current state by deploying files that differ between previous and current editions.
100. The method of Claim 97, wherein an edition preserves accurate snapshots of a web application as it existed at specific points in time, so that audit requirements are satisfied.
PCT/US2005/042732 2004-11-30 2005-11-23 System for transactionally deploying content across multiple machines WO2006060276A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/000,573 2004-11-30
US11/000,573 US7657887B2 (en) 2000-05-17 2004-11-30 System for transactionally deploying content across multiple machines

Publications (3)

Publication Number Publication Date
WO2006060276A2 WO2006060276A2 (en) 2006-06-08
WO2006060276A9 true WO2006060276A9 (en) 2006-08-17
WO2006060276A3 WO2006060276A3 (en) 2008-11-27

Family

ID=36565567

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/042732 WO2006060276A2 (en) 2004-11-30 2005-11-23 System for transactionally deploying content across multiple machines

Country Status (2)

Country Link
US (1) US7657887B2 (en)
WO (1) WO2006060276A2 (en)

Families Citing this family (218)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6146979A (en) * 1997-05-12 2000-11-14 Silicon Genesis Corporation Pressurized microbubble thin film separation process using a reusable substrate
US6883168B1 (en) 2000-06-21 2005-04-19 Microsoft Corporation Methods, systems, architectures and data structures for delivering software via a network
US7346848B1 (en) 2000-06-21 2008-03-18 Microsoft Corporation Single window navigation methods and systems
US7000230B1 (en) 2000-06-21 2006-02-14 Microsoft Corporation Network-based software extensions
US7191394B1 (en) 2000-06-21 2007-03-13 Microsoft Corporation Authoring arbitrary XML documents using DHTML and XSLT
WO2002019097A1 (en) * 2000-09-01 2002-03-07 International Interactive Commerce, Ltd. System and method for collaboration using web browsers
US20020188700A1 (en) * 2001-06-08 2002-12-12 Todd Steitle System and method of interactive network system design
US6981250B1 (en) * 2001-07-05 2005-12-27 Microsoft Corporation System and methods for providing versioning of software components in a computer programming language
US8001523B1 (en) 2001-07-05 2011-08-16 Microsoft Corporation System and methods for implementing an explicit interface member in a computer programming language
US7035944B2 (en) * 2001-09-19 2006-04-25 International Business Machines Corporation Programmatic management of software resources in a content framework environment
US7228326B2 (en) * 2002-01-18 2007-06-05 Bea Systems, Inc. Systems and methods for application deployment
AU2003215363A1 (en) 2002-02-22 2003-09-09 Bea Systems, Inc. System and method for software application scoping
US8140635B2 (en) * 2005-03-31 2012-03-20 Tripwire, Inc. Data processing environment change management methods and apparatuses
US7415672B1 (en) 2003-03-24 2008-08-19 Microsoft Corporation System and method for designing electronic forms
US7275216B2 (en) 2003-03-24 2007-09-25 Microsoft Corporation System and method for designing electronic forms and hierarchical schemas
US7370066B1 (en) 2003-03-24 2008-05-06 Microsoft Corporation System and method for offline editing of data files
US7913159B2 (en) 2003-03-28 2011-03-22 Microsoft Corporation System and method for real-time validation of structured data files
US7296017B2 (en) * 2003-03-28 2007-11-13 Microsoft Corporation Validation of XML data files
US7353509B2 (en) * 2003-05-27 2008-04-01 Akamai Technologies, Inc. Method and system for managing software installs in a distributed computer network
US7203745B2 (en) * 2003-05-29 2007-04-10 Akamai Technologies, Inc. Method of scheduling hosts for software updates in a distributed computer network
US7814093B2 (en) * 2003-07-25 2010-10-12 Microsoft Corporation Method and system for building a report for execution against a data store
US20050034064A1 (en) * 2003-07-25 2005-02-10 Activeviews, Inc. Method and system for creating and following drill links
US7406660B1 (en) 2003-08-01 2008-07-29 Microsoft Corporation Mapping between structured data and a visual surface
US7334187B1 (en) 2003-08-06 2008-02-19 Microsoft Corporation Electronic form aggregation
US7779039B2 (en) 2004-04-02 2010-08-17 Salesforce.Com, Inc. Custom entities and fields in a multi-tenant database system
US9123077B2 (en) 2003-10-07 2015-09-01 Hospira, Inc. Medication management system
US8065161B2 (en) 2003-11-13 2011-11-22 Hospira, Inc. System for maintaining drug information and communicating with medication delivery devices
US7739181B2 (en) * 2003-12-09 2010-06-15 Walker Digital, Llc Products and processes for establishing multi-transaction relationships with customers of vending machines
US7627496B2 (en) * 2004-12-09 2009-12-01 Walker Digital, Llc Systems and methods for vending machine customer account management
US20050132120A1 (en) * 2003-12-15 2005-06-16 Vasu Vijay Nomadic digital asset retrieval system
US7822826B1 (en) 2003-12-30 2010-10-26 Sap Ag Deployment of a web service
US8819072B1 (en) 2004-02-02 2014-08-26 Microsoft Corporation Promoting data from structured data files
US8762981B2 (en) * 2004-05-24 2014-06-24 Sap Ag Application loading and visualization
US7721283B2 (en) * 2004-05-24 2010-05-18 Sap Ag Deploying a variety of containers in a Java 2 enterprise edition-based architecture
US7735097B2 (en) * 2004-05-24 2010-06-08 Sap Ag Method and system to implement a deploy service to perform deployment services to extend and enhance functionalities of deployed applications
US7562341B2 (en) * 2004-05-24 2009-07-14 Sap Ag Deploy callback system with bidirectional containers
US7747698B2 (en) * 2004-05-25 2010-06-29 Sap Ag Transaction model for deployment operations
US7882502B2 (en) * 2004-05-25 2011-02-01 Sap Ag Single file update
US7877735B2 (en) * 2004-05-25 2011-01-25 Sap Ag Application cloning
US7774620B1 (en) 2004-05-27 2010-08-10 Microsoft Corporation Executing applications at appropriate trust levels
US8347078B2 (en) 2004-10-18 2013-01-01 Microsoft Corporation Device certificate individualization
US8487879B2 (en) 2004-10-29 2013-07-16 Microsoft Corporation Systems and methods for interacting with a computer through handwriting to a screen
US8336085B2 (en) 2004-11-15 2012-12-18 Microsoft Corporation Tuning product policy using observed evidence of customer behavior
US7721190B2 (en) 2004-11-16 2010-05-18 Microsoft Corporation Methods and systems for server side form processing
US7937651B2 (en) 2005-01-14 2011-05-03 Microsoft Corporation Structural editing operations for network forms
US7725834B2 (en) 2005-03-04 2010-05-25 Microsoft Corporation Designer-created aspect for an electronic form template
US8099324B2 (en) * 2005-03-29 2012-01-17 Microsoft Corporation Securely providing advertising subsidized computer usage
US8010515B2 (en) 2005-04-15 2011-08-30 Microsoft Corporation Query to an electronic form
US20060271660A1 (en) * 2005-05-26 2006-11-30 Bea Systems, Inc. Service oriented architecture implementation planning
US8200975B2 (en) 2005-06-29 2012-06-12 Microsoft Corporation Digital signatures for network forms
US8019827B2 (en) * 2005-08-15 2011-09-13 Microsoft Corporation Quick deploy of content
CN101258483B (en) 2005-09-09 2015-08-12 易享信息技术(上海)有限公司 For deriving, issuing, browse and installing system with applying and method thereof in multi-tenant database environment
US8078671B2 (en) * 2005-09-21 2011-12-13 Sap Ag System and method for dynamic web services descriptor generation using templates
US20070067388A1 (en) * 2005-09-21 2007-03-22 Angelov Dimitar V System and method for configuration to web services descriptor
US20070067384A1 (en) * 2005-09-21 2007-03-22 Angelov Dimitar V System and method for web services configuration creation and validation
US20070073771A1 (en) * 2005-09-28 2007-03-29 Baikov Chavdar S Method and system for directly mapping web services interfaces and java interfaces
US8250522B2 (en) 2005-09-28 2012-08-21 Sap Ag Method and system for generating a web services meta model on the java stack
US9454616B2 (en) * 2005-09-28 2016-09-27 Sap Se Method and system for unifying configuration descriptors
US8700681B2 (en) 2005-09-28 2014-04-15 Sap Ag Method and system for generating schema to java mapping descriptors
KR20080045752A (en) * 2005-10-14 2008-05-23 노키아 코포레이션 Declaring terminal provisioning with service guide
US8001459B2 (en) 2005-12-05 2011-08-16 Microsoft Corporation Enabling electronic documents for limited-capability computing devices
US8719815B1 (en) * 2005-12-09 2014-05-06 Crimson Corporation Systems and methods for distributing a computer software package using a pre-requisite query
US8024425B2 (en) * 2005-12-30 2011-09-20 Sap Ag Web services deployment
US20070156872A1 (en) * 2005-12-30 2007-07-05 Stoyanova Dimitrina G Method and system for Web services deployment
US7814060B2 (en) * 2005-12-30 2010-10-12 Sap Ag Apparatus and method for web service client deployment
US8010695B2 (en) * 2005-12-30 2011-08-30 Sap Ag Web services archive
US20090037451A1 (en) * 2006-01-25 2009-02-05 Replicus Software Corporation Attack and Disaster Resilient Cellular Storage Systems and Methods
US7962566B2 (en) * 2006-03-27 2011-06-14 Sap Ag Optimized session management for fast session failover and load balancing
US8086667B1 (en) * 2006-03-28 2011-12-27 Emc Corporation Providing access to managed content in rich client application environments
US7653732B1 (en) 2006-03-28 2010-01-26 Emc Corporation Providing session services with application connectors
US7640249B2 (en) * 2006-03-29 2009-12-29 Sap (Ag) System and method for transactional session management
US9485151B2 (en) * 2006-04-20 2016-11-01 International Business Machines Corporation Centralized system management on endpoints of a distributed data processing system
US7802243B1 (en) * 2006-04-20 2010-09-21 Open Invention Network Llc System and method for server customization
JP5028022B2 (en) * 2006-04-25 2012-09-19 キヤノン株式会社 Printing apparatus and document printing method
US8538931B2 (en) * 2006-04-28 2013-09-17 International Business Machines Corporation Protecting the integrity of dependent multi-tiered transactions
US8185435B2 (en) * 2006-06-16 2012-05-22 At&T Intellectual Property I, L.P. Methods, systems, and computer program products for facilitating content-based selection of long-tail business models and billing
US9830145B2 (en) 2006-08-14 2017-11-28 Federal Home Loan Mortgage Corporation (Freddie Mac) Systems and methods for infrastructure and middleware provisioning
US20080059630A1 (en) * 2006-08-29 2008-03-06 Juergen Sattler Assistant
US7827528B2 (en) 2006-08-29 2010-11-02 Sap Ag Delta layering
US7912800B2 (en) * 2006-08-29 2011-03-22 Sap Ag Deduction engine to determine what configuration management scoping questions to ask a user based on responses to one or more previous questions
US20080071555A1 (en) * 2006-08-29 2008-03-20 Juergen Sattler Application solution proposal engine
US20080082517A1 (en) * 2006-08-29 2008-04-03 Sap Ag Change assistant
US8065661B2 (en) * 2006-08-29 2011-11-22 Sap Ag Test engine
US7908589B2 (en) * 2006-08-29 2011-03-15 Sap Ag Deployment
US7831568B2 (en) 2006-08-29 2010-11-09 Sap Ag Data migration
US8131644B2 (en) * 2006-08-29 2012-03-06 Sap Ag Formular update
US20080127082A1 (en) * 2006-08-29 2008-05-29 Miho Emil Birimisa System and method for requirements-based application configuration
US7831637B2 (en) * 2006-08-29 2010-11-09 Sap Ag System on the fly
US7823124B2 (en) * 2006-08-29 2010-10-26 Sap Ag Transformation layer
US8572057B2 (en) * 2006-10-02 2013-10-29 Salesforce.Com, Inc. Method and system for applying a group of instructions to metadata
US8019720B2 (en) * 2006-10-02 2011-09-13 Salesforce.Com, Inc. Asynchronous method and system for performing an operation on metadata
AU2007317669A1 (en) 2006-10-16 2008-05-15 Hospira, Inc. System and method for comparing and utilizing activity information and configuration information from mulitple device management systems
US8234640B1 (en) 2006-10-17 2012-07-31 Manageiq, Inc. Compliance-based adaptations in managed virtual systems
US8949825B1 (en) 2006-10-17 2015-02-03 Manageiq, Inc. Enforcement of compliance policies in managed virtual systems
US9086917B1 (en) 2006-10-17 2015-07-21 Manageiq, Inc. Registering and accessing virtual systems for use in a managed system
US8949826B2 (en) 2006-10-17 2015-02-03 Managelq, Inc. Control and management of virtual systems
US9038062B2 (en) * 2006-10-17 2015-05-19 Manageiq, Inc. Registering and accessing virtual systems for use in a managed system
US9697019B1 (en) 2006-10-17 2017-07-04 Manageiq, Inc. Adapt a virtual machine to comply with system enforced policies and derive an optimized variant of the adapted virtual machine
US8612971B1 (en) 2006-10-17 2013-12-17 Manageiq, Inc. Automatic optimization for virtual systems
US8752045B2 (en) 2006-10-17 2014-06-10 Manageiq, Inc. Methods and apparatus for using tags to control and manage assets
US8234641B2 (en) 2006-10-17 2012-07-31 Managelq, Inc. Compliance-based adaptations in managed virtual systems
US8458695B2 (en) 2006-10-17 2013-06-04 Manageiq, Inc. Automatic optimization for virtual systems
US9015703B2 (en) * 2006-10-17 2015-04-21 Manageiq, Inc. Enforcement of compliance policies in managed virtual systems
KR101079592B1 (en) * 2006-11-03 2011-11-04 삼성전자주식회사 Display apparatus and information update method thereof
US9111020B2 (en) * 2007-03-24 2015-08-18 General Electric Company Architecture and methods for sophisticated distributed information systems
US8386923B2 (en) 2007-05-08 2013-02-26 Canon Kabushiki Kaisha Document generation apparatus, method, and storage medium
US20080320502A1 (en) * 2007-06-20 2008-12-25 Microsoft Corporation Providing Information about Software Components
US8490078B2 (en) * 2007-09-25 2013-07-16 Barclays Capital, Inc. System and method for application management
US20090083738A1 (en) * 2007-09-25 2009-03-26 Microsoft Corporation Automated data object set administration
US8732692B2 (en) * 2007-11-07 2014-05-20 Bayerische Motoren Werke Aktiengesellschaft Deployment and management framework
US8418173B2 (en) 2007-11-27 2013-04-09 Manageiq, Inc. Locating an unauthorized virtual machine and bypassing locator code by adjusting a boot pointer of a managed virtual machine in authorized environment
US8407688B2 (en) 2007-11-27 2013-03-26 Managelq, Inc. Methods and apparatus for storing and transmitting historical configuration data associated with information technology assets
US8375379B2 (en) * 2008-01-31 2013-02-12 SAP France S.A. Importing language extension resources to support application execution
FR2927436A1 (en) * 2008-02-12 2009-08-14 Ingenico Sa METHOD FOR SECURING COMPUTER PROGRAM, APPARATUS, METHOD FOR UPDATING AND CORRESPONDING UPDATE SERVER.
US20090234902A1 (en) * 2008-03-11 2009-09-17 Pilosof Erez System, method and apparatus for making content available over multiple devices
US8307096B2 (en) 2008-05-15 2012-11-06 At&T Intellectual Property I, L.P. Method and system for managing the transfer of files among multiple computer systems
US8769640B2 (en) * 2008-05-29 2014-07-01 Microsoft Corporation Remote publishing and server administration
US9111118B2 (en) * 2008-08-29 2015-08-18 Red Hat, Inc. Managing access in a software provisioning environment
US8135659B2 (en) * 2008-10-01 2012-03-13 Sap Ag System configuration comparison to identify process variation
US8396893B2 (en) * 2008-12-11 2013-03-12 Sap Ag Unified configuration of multiple applications
US8352912B2 (en) * 2008-12-15 2013-01-08 International Business Machines Corporation Method and system for topology modeling
US8255429B2 (en) * 2008-12-17 2012-08-28 Sap Ag Configuration change without disruption of incomplete processes
US8225281B1 (en) * 2009-02-04 2012-07-17 Sprint Communications Company L.P. Automated baseline deployment system
US9378011B2 (en) * 2009-03-19 2016-06-28 Microsoft Technology Licensing, Llc Network application versioning
US8561059B2 (en) * 2009-04-07 2013-10-15 Sap Ag Apparatus and storage device for consolidating installation and deployment of environments
US8271106B2 (en) 2009-04-17 2012-09-18 Hospira, Inc. System and method for configuring a rule set for medical event management and responses
US10482425B2 (en) * 2009-09-29 2019-11-19 Salesforce.Com, Inc. Techniques for managing functionality changes of an on-demand database system
US8584087B2 (en) * 2009-12-11 2013-11-12 Sap Ag Application configuration deployment monitor
US8677309B2 (en) 2009-12-29 2014-03-18 Oracle International Corporation Techniques for automated generation of deployment plans in an SOA development lifecycle
US20110178984A1 (en) * 2010-01-18 2011-07-21 Microsoft Corporation Replication protocol for database systems
US8825601B2 (en) 2010-02-01 2014-09-02 Microsoft Corporation Logical data backup and rollback using incremental capture in a distributed database
US20110246499A1 (en) * 2010-03-30 2011-10-06 Yuval Carmel Method and system for evaluating compliance within a configuration-management system
US8707296B2 (en) 2010-04-27 2014-04-22 Apple Inc. Dynamic retrieval of installation packages when installing software
US8930942B2 (en) * 2010-05-26 2015-01-06 Tibco Software Inc. Capability model for deploying componentized applications
US20110296310A1 (en) * 2010-05-27 2011-12-01 Yuval Carmel Determining whether a composite configuration item satisfies a compliance rule
US10210574B2 (en) 2010-06-28 2019-02-19 International Business Machines Corporation Content management checklist object
US8413132B2 (en) * 2010-09-13 2013-04-02 Samsung Electronics Co., Ltd. Techniques for resolving read-after-write (RAW) conflicts using backup area
US8661432B2 (en) * 2010-10-05 2014-02-25 Sap Ag Method, computer program product and system for installing applications and prerequisites components
US8468132B1 (en) 2010-12-28 2013-06-18 Amazon Technologies, Inc. Data replication framework
US9449065B1 (en) 2010-12-28 2016-09-20 Amazon Technologies, Inc. Data replication framework
US8554762B1 (en) 2010-12-28 2013-10-08 Amazon Technologies, Inc. Data replication framework
US10198492B1 (en) * 2010-12-28 2019-02-05 Amazon Technologies, Inc. Data replication framework
US8788669B2 (en) * 2011-01-03 2014-07-22 Novell, Inc. Policy and identity based workload provisioning
US9128768B2 (en) 2011-01-27 2015-09-08 Microsoft Technology Licensing, LCC Cloud based master data management
US20120198018A1 (en) * 2011-01-27 2012-08-02 Microsoft Corporation Securely publishing data to network service
US9584949B2 (en) 2011-01-27 2017-02-28 Microsoft Technology Licensing, Llc Cloud based master data management architecture
US9288074B2 (en) 2011-06-30 2016-03-15 International Business Machines Corporation Resource configuration change management
US8943220B2 (en) 2011-08-04 2015-01-27 Microsoft Corporation Continuous deployment of applications
US8732693B2 (en) 2011-08-04 2014-05-20 Microsoft Corporation Managing continuous software deployment
US9038055B2 (en) 2011-08-05 2015-05-19 Microsoft Technology Licensing, Llc Using virtual machines to manage software builds
US10067754B2 (en) * 2011-08-11 2018-09-04 International Business Machines Corporation Software service notifications based upon software usage, configuration, and deployment topology
US8825864B2 (en) 2011-09-29 2014-09-02 Oracle International Corporation System and method for supporting a dynamic resource broker in a transactional middleware machine environment
US9594875B2 (en) 2011-10-21 2017-03-14 Hospira, Inc. Medical device update system
US10509705B2 (en) * 2011-11-04 2019-12-17 Veritas Technologies Llc Application protection through a combined functionality failure manager
US8856295B2 (en) 2012-01-10 2014-10-07 Oracle International Corporation System and method for providing an enterprise deployment topology with thick client functionality
US8782632B1 (en) * 2012-06-18 2014-07-15 Tellabs Operations, Inc. Methods and apparatus for performing in-service software upgrade for a network device using system virtualization
US20140019573A1 (en) * 2012-07-16 2014-01-16 Compellent Technologies Source reference replication in a data storage subsystem
US9032388B1 (en) * 2012-07-18 2015-05-12 Amazon Technologies, Inc. Authorizing or preventing deployment of update information based on deployment parameters
US9612866B2 (en) 2012-08-29 2017-04-04 Oracle International Corporation System and method for determining a recommendation on submitting a work request based on work request type
US9992260B1 (en) * 2012-08-31 2018-06-05 Fastly Inc. Configuration change processing for content request handling in content delivery node
US9158528B2 (en) * 2012-10-02 2015-10-13 Oracle International Corporation Forcibly completing upgrade of distributed software in presence of failures
US9251324B2 (en) * 2012-12-13 2016-02-02 Microsoft Technology Licensing, Llc Metadata driven real-time analytics framework
US9596279B2 (en) 2013-02-08 2017-03-14 Dell Products L.P. Cloud-based streaming data receiver and persister
US9191432B2 (en) 2013-02-11 2015-11-17 Dell Products L.P. SAAS network-based backup system
US9141680B2 (en) 2013-02-11 2015-09-22 Dell Products L.P. Data consistency and rollback for cloud analytics
US9442993B2 (en) 2013-02-11 2016-09-13 Dell Products L.P. Metadata manager for analytics system
US9229902B1 (en) * 2013-02-14 2016-01-05 Amazon Technologies, Inc. Managing update deployment
US9641432B2 (en) 2013-03-06 2017-05-02 Icu Medical, Inc. Medical device communication method
AU2014312122A1 (en) 2013-08-30 2016-04-07 Icu Medical, Inc. System and method of monitoring and managing a remote infusion regimen
US9662436B2 (en) 2013-09-20 2017-05-30 Icu Medical, Inc. Fail-safe drug infusion therapy system
US10311972B2 (en) 2013-11-11 2019-06-04 Icu Medical, Inc. Medical device system performance index
US20150134719A1 (en) * 2013-11-13 2015-05-14 Kaseya International Limited Third party application delivery via an agent portal
US10042986B2 (en) 2013-11-19 2018-08-07 Icu Medical, Inc. Infusion pump automation system and method
CN104660522A (en) * 2013-11-22 2015-05-27 英业达科技有限公司 Automatic node configuration method and server system
US11003740B2 (en) * 2013-12-31 2021-05-11 International Business Machines Corporation Preventing partial change set deployments in content management systems
US10169440B2 (en) * 2014-01-27 2019-01-01 International Business Machines Corporation Synchronous data replication in a content management system
ES2824263T3 (en) * 2014-02-11 2021-05-11 Wix Com Ltd A system for synchronizing changes to edited websites and interactive applications
US9215214B2 (en) 2014-02-20 2015-12-15 Nicira, Inc. Provisioning firewall rules on a firewall enforcing device
WO2015168427A1 (en) 2014-04-30 2015-11-05 Hospira, Inc. Patient care system with conditional alarm forwarding
US9724470B2 (en) 2014-06-16 2017-08-08 Icu Medical, Inc. System for monitoring and delivering medication to a patient and method of using the same to minimize the risks associated with automated therapy
US9539383B2 (en) 2014-09-15 2017-01-10 Hospira, Inc. System and method that matches delayed infusion auto-programs with manually entered infusion programs and analyzes differences therein
US9716692B2 (en) * 2015-01-01 2017-07-25 Bank Of America Corporation Technology-agnostic application for high confidence exchange of data between an enterprise and third parties
US10320892B2 (en) 2015-01-02 2019-06-11 Microsoft Technology Licensing, Llc Rolling capacity upgrade control
US10572449B2 (en) * 2015-03-02 2020-02-25 Walmart Apollo, Llc Systems, devices, and methods for software discovery using application ID tags
EP3304370B1 (en) 2015-05-26 2020-12-30 ICU Medical, Inc. Infusion pump system and method with multiple drug library editor source capability
US10140140B2 (en) * 2015-06-30 2018-11-27 Microsoft Technology Licensing, Llc Cloud virtual machine customization using extension framework
US9755903B2 (en) 2015-06-30 2017-09-05 Nicira, Inc. Replicating firewall policy across multiple data centers
US10241775B2 (en) 2016-01-14 2019-03-26 Ca, Inc. Dynamic release baselines in a continuous delivery environment
US20170242859A1 (en) * 2016-02-24 2017-08-24 David Sazan Digital media content comparator
US10146524B1 (en) * 2016-03-28 2018-12-04 Amazon Technologies, Inc. Preemptive deployment in software deployment pipelines
US10135727B2 (en) 2016-04-29 2018-11-20 Nicira, Inc. Address grouping for distributed service rules
US10348685B2 (en) 2016-04-29 2019-07-09 Nicira, Inc. Priority allocation for distributed service rules
US11171920B2 (en) 2016-05-01 2021-11-09 Nicira, Inc. Publication of firewall configuration
US11425095B2 (en) 2016-05-01 2022-08-23 Nicira, Inc. Fast ordering of firewall sections and rules
US11258761B2 (en) 2016-06-29 2022-02-22 Nicira, Inc. Self-service firewall configuration
US11082400B2 (en) 2016-06-29 2021-08-03 Nicira, Inc. Firewall configuration versioning
WO2018013842A1 (en) 2016-07-14 2018-01-18 Icu Medical, Inc. Multi-communication path selection and security system for a medical device
US10614117B2 (en) * 2017-03-21 2020-04-07 International Business Machines Corporation Sharing container images between multiple hosts through container orchestration
US10574624B2 (en) * 2017-10-09 2020-02-25 Level 3 Communications, Llc Staged deployment of rendezvous tables for selecting a content delivery network (CDN)
US10915516B2 (en) * 2017-10-18 2021-02-09 Cisco Technology, Inc. Efficient trickle updates in large databases using persistent memory
US10439925B2 (en) * 2017-12-21 2019-10-08 Akamai Technologies, Inc. Sandbox environment for testing integration between a content provider origin and a content delivery network
US10741280B2 (en) 2018-07-17 2020-08-11 Icu Medical, Inc. Tagging pump messages with identifiers that facilitate restructuring
US11139058B2 (en) 2018-07-17 2021-10-05 Icu Medical, Inc. Reducing file transfer between cloud environment and infusion pumps
EP3824383B1 (en) 2018-07-17 2023-10-11 ICU Medical, Inc. Systems and methods for facilitating clinical messaging in a network environment
AU2019306490A1 (en) 2018-07-17 2021-02-04 Icu Medical, Inc. Updating infusion pump drug libraries and operational software in a networked environment
US10692595B2 (en) 2018-07-26 2020-06-23 Icu Medical, Inc. Drug library dynamic version management
AU2019309766A1 (en) 2018-07-26 2021-03-18 Icu Medical, Inc. Drug library management system
US10320625B1 (en) 2018-08-21 2019-06-11 Capital One Services, Llc Managing service deployment in a cloud computing environment
US11310202B2 (en) 2019-03-13 2022-04-19 Vmware, Inc. Sharing of firewall rules among multiple workloads in a hypervisor
CN110532096B (en) * 2019-08-28 2022-12-30 深圳市云存宝技术有限公司 System and method for multi-node grouping parallel deployment
US11595255B2 (en) * 2020-01-16 2023-02-28 Vmware, Inc. Visual tracking of logical network state
US11507438B1 (en) * 2020-02-28 2022-11-22 The Pnc Financial Services Group, Inc. Systems and methods for processing digital experience information
CN112084170A (en) * 2020-08-12 2020-12-15 Ansible-based MySQL MHA cluster one-click deployment method and system
US11381473B1 (en) * 2020-09-15 2022-07-05 Amazon Technologies, Inc. Generating resources in a secured network
CN113076130A (en) * 2021-03-23 2021-07-06 上海金融期货信息技术有限公司 General counter system operation and maintenance method based on SHELL script
CN113778461A (en) * 2021-09-09 2021-12-10 北京炎黄新星网络科技有限公司 Method and system for realizing automatic application deployment
CN114363332B (en) * 2021-12-27 2024-01-23 徐工汉云技术股份有限公司 Remote automatic operation and maintenance method based on distributed gateway
US20230393825A1 (en) * 2022-06-03 2023-12-07 Dell Products L.P. Automated software deployment techniques

Family Cites Families (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0229232A2 (en) 1985-12-31 1987-07-22 Tektronix, Inc. File management system
US5220657A (en) 1987-12-02 1993-06-15 Xerox Corporation Updating local copy of shared data in a collaborative system
US5008853A (en) 1987-12-02 1991-04-16 Xerox Corporation Representation of collaborative multi-user activities relative to shared structured data objects in a networked workstation environment
CA2025160A1 (en) 1989-09-28 1991-03-29 John W. White Portable and dynamic distributed applications architecture
US5555371A (en) 1992-12-17 1996-09-10 International Business Machines Corporation Data backup copying with delayed directory updating and reduced numbers of DASD accesses at a back up site using a log structured array data storage
US5515491A (en) 1992-12-31 1996-05-07 International Business Machines Corporation Method and system for managing communications within a collaborative data processing system
US5659747A (en) 1993-04-22 1997-08-19 Microsoft Corporation Multiple level undo/redo mechanism
US5835757A (en) * 1994-03-30 1998-11-10 Siemens Telecom Networks Distributed database management system for servicing application requests in a telecommunications switching system
US5764977A (en) * 1994-03-30 1998-06-09 Siemens Stromberg-Carlson Distributed database architecture and distributed database management system for open network evolution
US5557737A (en) 1994-06-13 1996-09-17 Bull Hn Information Systems Inc. Automated safestore stack generation and recovery in a fault tolerant central processor
US6052695A (en) * 1995-02-28 2000-04-18 Ntt Data Communications Systems Corporation Accurate completion of transaction in cooperative type distributed system and recovery procedure for same
US5675802A (en) 1995-03-31 1997-10-07 Pure Atria Corporation Version control system for geographically distributed software development
US5862325A (en) 1996-02-29 1999-01-19 Intermind Corporation Computer-based communication system and method using metadata defining a control structure
US5784548A (en) 1996-03-08 1998-07-21 Mylex Corporation Modular mirrored cache memory battery backup system
US5835712A (en) 1996-05-03 1998-11-10 Webmate Technologies, Inc. Client-server system using embedded hypertext tags for application and database development
US5857204A (en) * 1996-07-02 1999-01-05 Ab Initio Software Corporation Restoring the state of a set of files
US6151609A (en) 1996-08-16 2000-11-21 Electronic Data Systems Corporation Remote editor system
US5895476A (en) 1996-09-09 1999-04-20 Design Intelligence, Inc. Design engine for automatic reformatting for design and media
US6240444B1 (en) 1996-09-27 2001-05-29 International Business Machines Corporation Internet web page sharing
US6112024A (en) * 1996-10-02 2000-08-29 Sybase, Inc. Development system providing methods for managing different versions of objects with a meta model
US5958008A (en) 1996-10-15 1999-09-28 Mercury Interactive Corporation Software system and associated methods for scanning and mapping dynamically-generated web documents
US6088693A (en) 1996-12-06 2000-07-11 International Business Machines Corporation Data management system for file and database management
US6098091A (en) 1996-12-30 2000-08-01 Intel Corporation Method and system including a central computer that assigns tasks to idle workstations using availability schedules and computational capabilities
US5854930A (en) 1996-12-30 1998-12-29 Mci Communications Corporations System, method, and computer program product for script processing
US5983268A (en) 1997-01-14 1999-11-09 Netmind Technologies, Inc. Spreadsheet user-interface for an internet-document change-detection tool
US5898836A (en) 1997-01-14 1999-04-27 Netmind Services, Inc. Change-detection tool indicating degree and location of change of internet documents by comparison of cyclic-redundancy-check(CRC) signatures
US5913029A (en) 1997-02-07 1999-06-15 Portera Systems Distributed database system and method
US6195353B1 (en) * 1997-05-06 2001-02-27 Telefonaktiebolaget Lm Ericsson (Publ) Short packet circuit emulation
US5897638A (en) * 1997-06-16 1999-04-27 Ab Initio Software Corporation Parallel virtual file system
US6233600B1 (en) 1997-07-15 2001-05-15 Eroom Technology, Inc. Method and system for providing a networked collaborative work environment
US6230185B1 (en) 1997-07-15 2001-05-08 Eroom Technology, Inc. Method and apparatus for facilitating communication between collaborators in a networked environment
US5937409A (en) * 1997-07-25 1999-08-10 Oracle Corporation Integrating relational databases in an object oriented environment
US6256712B1 (en) 1997-08-01 2001-07-03 International Business Machines Corporation Scaleable method for maintaining and making consistent updates to caches
US6330594B1 (en) * 1997-09-02 2001-12-11 Cybershift Holdings, Inc. Multiple tier interfacing with network computing environment
US20010010046A1 (en) 1997-09-11 2001-07-26 Muyres Matthew R. Client content management and distribution system
US6240414B1 (en) 1997-09-28 2001-05-29 Eisolutions, Inc. Method of resolving data conflicts in a shared data environment
JPH11112523A (en) * 1997-10-08 1999-04-23 Fujitsu Ltd Circuit emulation communication method, its transmission device and reception device
US6018747A (en) 1997-11-26 2000-01-25 International Business Machines Corporation Method for generating and reconstructing in-place delta files
US6209007B1 (en) 1997-11-26 2001-03-27 International Business Machines Corporation Web internet screen customizing system
US6178439B1 (en) 1997-12-23 2001-01-23 British Telecommunications Public Limited Company HTTP session control
US6256740B1 (en) * 1998-02-06 2001-07-03 Ncr Corporation Name service for multinode system segmented into I/O and compute nodes, generating guid at I/O node and exporting guid to compute nodes via interconnect fabric
US6546545B1 (en) * 1998-03-05 2003-04-08 American Management Systems, Inc. Versioning in a rules based decision management system
US6646989B1 (en) * 1998-06-29 2003-11-11 Lucent Technologies Inc. Hop-by-hop routing with node-dependent topology information
US6195760B1 (en) 1998-07-20 2001-02-27 Lucent Technologies Inc Method and apparatus for providing failure detection and recovery with predetermined degree of replication for distributed applications in a network
US6226372B1 (en) * 1998-12-11 2001-05-01 Securelogix Corporation Tightly integrated cooperative telecommunications firewall and scanner with distributed capabilities
US6452612B1 (en) 1998-12-18 2002-09-17 Parkervision, Inc. Real time video production system and method
US6507863B2 (en) * 1999-01-27 2003-01-14 International Business Machines Corporation Dynamic multicast routing facility for a distributed computing environment
US20010011265A1 (en) 1999-02-03 2001-08-02 Cuan William G. Method and apparatus for deploying data among data destinations for website development and maintenance
US20010039594A1 (en) 1999-02-03 2001-11-08 Park Britt H. Method for enforcing workflow processes for website development and maintenance
US7315826B1 (en) * 1999-05-27 2008-01-01 Accenture, Llp Comparatively analyzing vendors of components required for a web-based architecture
US6421676B1 (en) * 1999-06-30 2002-07-16 International Business Machines Corporation Scheduler for use in a scalable, distributed, asynchronous data collection mechanism
US7051365B1 (en) * 1999-06-30 2006-05-23 At&T Corp. Method and apparatus for a distributed firewall
US6434568B1 (en) * 1999-08-31 2002-08-13 Accenture Llp Information services patterns in a netcentric environment
US6640244B1 (en) 1999-08-31 2003-10-28 Accenture Llp Request batcher in a transaction services patterns environment
US6839803B1 (en) * 1999-10-27 2005-01-04 Shutterfly, Inc. Multi-tier data storage system
US6339832B1 (en) * 1999-08-31 2002-01-15 Accenture Llp Exception response table in environment services patterns
US6662357B1 (en) 1999-08-31 2003-12-09 Accenture Llp Managing information in an integrated development architecture framework
US6715145B1 (en) 1999-08-31 2004-03-30 Accenture Llp Processing pipeline in a base services pattern environment
US8271336B2 (en) 1999-11-22 2012-09-18 Accenture Global Services Gmbh Increased visibility during order management in a network-based supply chain environment
US6606744B1 (en) 1999-11-22 2003-08-12 Accenture, Llp Providing collaborative installation management in a network-based supply chain environment
US6732189B1 (en) * 2000-03-20 2004-05-04 International Business Machines Corporation Method and apparatus for fault tolerant tunneling of multicast datagrams
US20010044834A1 (en) 2000-03-22 2001-11-22 Robert Bradshaw Method and apparatus for automatically deploying data in a computer network
JP2003528392A (en) 2000-03-22 2003-09-24 インターウォーヴェン インコーポレイテッド Method and apparatus for recovering ongoing changes made in a software application
US6728715B1 (en) * 2000-03-30 2004-04-27 International Business Machines Corporation Method and system for matching consumers to events employing content-based multicast routing using approximate groups
US6701345B1 (en) 2000-04-13 2004-03-02 Accenture Llp Providing a notification when a plurality of users are altering similar data in a health care solution environment
US6976090B2 (en) 2000-04-20 2005-12-13 Actona Technologies Ltd. Differentiated content and application delivery via internet
WO2001088666A2 (en) 2000-05-17 2001-11-22 Interwoven Inc. Method and apparatus for automatically deploying data and simultaneously executing computer program scripts in a computer network
US20020055928A1 (en) 2000-06-21 2002-05-09 Imedium, Inc. Methods and apparatus employing multi-tier de-coupled architecture for enabling visual interactive display
US20020194483A1 (en) 2001-02-25 2002-12-19 Storymail, Inc. System and method for authorization of access to a resource
WO2002019097A1 (en) 2000-09-01 2002-03-07 International Interactive Commerce, Ltd. System and method for collaboration using web browsers
US7209921B2 (en) 2000-09-01 2007-04-24 Op40, Inc. Method and system for deploying an asset over a multi-tiered network
US20020199014A1 (en) 2001-03-26 2002-12-26 Accton Technology Corporation Configurable and high-speed content-aware routing method
US7580988B2 (en) 2001-04-05 2009-08-25 Intertrust Technologies Corporation System and methods for managing the distribution of electronic content
US8180871B2 (en) 2001-05-23 2012-05-15 International Business Machines Corporation Dynamic redeployment of services in a computing network
EP1292144A1 (en) 2001-08-14 2003-03-12 IP-Control GmbH System, method and software for delivering content from a server to a customer
AU2002332556A1 (en) 2001-08-15 2003-03-03 Visa International Service Association Method and system for delivering multiple services electronically to customers via a centralized portal architecture
US8042132B2 (en) 2002-03-15 2011-10-18 Tvworks, Llc System and method for construction, delivery and display of iTV content
US7299033B2 (en) 2002-06-28 2007-11-20 Openwave Systems Inc. Domain-based management of distribution of digital content from multiple suppliers to multiple wireless services subscribers
US20040015408A1 (en) 2002-07-18 2004-01-22 Rauen Philip Joseph Corporate content management and delivery system
JP3675465B2 (en) * 2003-10-29 2005-07-27 ソニー株式会社 Encoding control apparatus and encoding system
JP2007096801A (en) * 2005-09-29 2007-04-12 Toshiba Corp Communication apparatus, content transmitting and receiving system, and content list management method of communication apparatus

Also Published As

Publication number Publication date
US20050080801A1 (en) 2005-04-14
WO2006060276A3 (en) 2008-11-27
WO2006060276A2 (en) 2006-06-08
US7657887B2 (en) 2010-02-02

Similar Documents

Publication Publication Date Title
US7657887B2 (en) System for transactionally deploying content across multiple machines
JP5833725B2 (en) Control services for relational data management
US7590669B2 (en) Managing client configuration data
US8307003B1 (en) Self-service control environment
US10261872B2 (en) Multilevel disaster recovery
US7133917B2 (en) System and method for distribution of software licenses in a networked computing environment
US20020004824A1 (en) Method and apparatus for automatically deploying data and simultaneously executing computer program scripts in a computer network
US20070100834A1 (en) System and method for managing data in a distributed computer system
US8046329B2 (en) Incremental backup of database for non-archive logged servers
US20070156774A1 (en) Multi-Tier Document Management System
US20080016148A1 (en) Systems, methods and computer program products for performing remote data storage for client devices
KR100935831B1 (en) A method for data synchronization based on a data structure having multiple event identifiers, and a data backup solution using the method
US20050198229A1 (en) Methods, systems, and computer program products for template-based network element management
Beach et al. Relational database service
Sabharwal et al. Administering CloudSQL
Watson Pro Oracle Collaboration Suite 10g
Ramey Pro Oracle Identity and Access Management Suite
Jeffries Oracle GoldenGate 12c implementer's guide
Minor et al. Chronopolis: preserving our digital heritage
Hall et al. DDR_EnvironmentalScan.csv
Raghav Oracle Recovery Appliance Handbook: An Insider's Insight
Konstantinov Automated Web Application Monitoring
Blevins et al. Oracle Fusion Middleware 2 Day Administration Guide, 11g Release 1 (11.1.1) E10064-03
Andersson et al. Microsoft Exchange Server PowerShell Cookbook
Bednar et al. Oracle Secure Backup Administrator's Guide, Release 10.1 B14234-01

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
122 Ep: pct application non-entry in european phase

Ref document number: 05852189

Country of ref document: EP

Kind code of ref document: A2
