This part plays the roles of the Account Service and of an accounts payable client. It transforms incoming method invocations into accounts payable requests, waits for the accounts payable replies, and uses the results to complete the original method invocations. This part is called the server wrapper.
Figure 1. System Context - Server Wrapper
This server wrapper is configured with Guardian File System and TCP/CS server protocols. Because the Factory has a PERSISTENT_LIFESPAN, the server wrapper could be configured as a server pool.
The server wrapper uses the NonStop DOM Naming Service to bind the Factory’s reference to the name Server_Wrapper.
The client asks the Naming Service to resolve the name Server_Wrapper to an object reference. It then narrows the result to Account_Service::Factory and invokes the create method. The client then invokes the Inquiry, Credit, and Debit methods on the resulting Account_Service::Account reference.
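In outline, the client side might look like the following minimal sketch, assuming the standard CORBA C++ mapping and an IDL-generated header named Account_Service.hh (an assumption); the parameter lists of the Account methods are defined by the IDL and are elided here.

```cpp
#include "Account_Service.hh"   // assumed name of the IDL-generated header

int main(int argc, char** argv)
{
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

    // Resolve the name Server_Wrapper to an object reference.
    CORBA::Object_var ns_obj = orb->resolve_initial_references("NameService");
    CosNaming::NamingContext_var nc = CosNaming::NamingContext::_narrow(ns_obj);

    CosNaming::Name name;
    name.length(1);
    name[0].id   = CORBA::string_dup("Server_Wrapper");
    name[0].kind = CORBA::string_dup("");
    CORBA::Object_var obj = nc->resolve(name);

    // Narrow to the Factory and create an Account.
    Account_Service::Factory_var factory = Account_Service::Factory::_narrow(obj);
    Account_Service::Account_var account = factory->create();

    // Invoke the Account methods; argument lists are defined by the IDL.
    account->Inquiry(/* IDL-defined arguments */);
    account->Credit(/* IDL-defined arguments */);
    account->Debit(/* IDL-defined arguments */);

    orb->destroy();
    return 0;
}
```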
Because the Naming Service and the Account Service are both configured to use_comm_server, the IIOP profile in their IORs contains the LSD’s address, allowing the LSD to select the best communications server (CS) for subsequent traffic to the service. Note that for this version of NonStop DOM 2.0, this adds no value.
Figure 2. Internal Process Architecture of Server Wrapper
The internal architecture of this wrapper is basically the same as any NonStop DOM 2.0 process. The NSDEvent parts are used to normalize the interaction with the NonStop Kernel platform. The NSDORB parts use NSDEvent parts to perform the work of sending and receiving GIOP messages, and queue requests for the Portable Object Adapters (POAs). The POAs dispatch methods on the factory and workers through the stubs generated by the IDL compiler.
The difference between a server wrapper and a pure CORBA application is shown by the Worker. Its methods are dispatched the same way, but the method implementation involves communicating with the wrapped system using NSDEvent components directly.
Figure 3. Class Hierarchy
As in any CORBA server, an IDL interface is implemented by deriving from the POA skeleton class generated by the IDL compiler. The resulting implementation class then overrides the C++ methods defined by the interface. The Factory is a simple example of this.
In addition to inheriting from the POA class, the Worker inherits from the Fw_Event_Handler class and overrides the handle_event method. The Worker has a condition variable that is used while awaiting the completion of the SERVERCLASS_SEND_ operation. The Worker uses transformation functions to convert between method parameters and ISO8583 messages. Lastly, the Worker introduces a method of its own to perform a SERVERCLASS_SEND_, wait for the completion, and interpret the results.
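A declaration of the Worker along these lines might look like the following sketch; the condition-variable type name and the method parameter lists are illustrative assumptions, since the document does not show them.

```cpp
// A sketch of the Worker servant declaration (names partly assumed).
class Worker : public POA_Account_Service::Account,
               public Fw_Event_Handler
{
public:
    // IDL-defined Account operations; signatures come from the IDL and
    // are elided here. Each bridges a CORBA invocation to an ISO8583
    // request/reply exchange with the wrapped system.
    virtual void Inquiry(/* IDL-defined parameters */);
    virtual void Credit(/* IDL-defined parameters */);
    virtual void Debit(/* IDL-defined parameters */);

    // Fw_Event_Handler override: called on the event thread when the
    // SERVERCLASS_SEND_ completion arrives.
    virtual void handle_event(Fw_Event& event);

private:
    // Worker's own method: issue a nowaited SERVERCLASS_SEND_, wait for
    // the completion, and interpret the results (sketched later).
    void do_pathsend(Fw_MD& md);

    // Signaled by handle_event when the send completes.
    Fw_Condition_Variable cv_done;   // assumed type name
};
```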
The server wrapper’s main function is basically the same as any NonStop DOM 2.0 server main function. It initializes the ORB, builds its POAs, then generates any permanently resident object instances – in this case, the Factory. It binds the name Server_Wrapper to the Factory using the Naming Service. Then it lets the ORB run, waiting for remote method invocations.
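In outline, assuming the standard CORBA C++ mapping and a servant class named Factory_impl (an assumed name), the main function might look like this sketch; the PERSISTENT lifespan policy reflects the Factory configuration noted earlier.

```cpp
#include "Account_Service.hh"   // assumed IDL-generated header

int main(int argc, char** argv)
{
    // Initialize the ORB and obtain the root POA.
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);
    CORBA::Object_var poa_obj = orb->resolve_initial_references("RootPOA");
    PortableServer::POA_var root_poa = PortableServer::POA::_narrow(poa_obj);
    PortableServer::POAManager_var poa_mgr = root_poa->the_POAManager();

    // Build a child POA with the PERSISTENT lifespan policy for the Factory.
    CORBA::PolicyList policies;
    policies.length(1);
    policies[0] = root_poa->create_lifespan_policy(PortableServer::PERSISTENT);
    PortableServer::POA_var factory_poa =
        root_poa->create_POA("FactoryPOA", poa_mgr, policies);  // assumed POA name

    // Generate the permanently resident Factory instance and activate it.
    Factory_impl factory;
    PortableServer::ObjectId_var oid = factory_poa->activate_object(&factory);
    CORBA::Object_var factory_ref = factory_poa->id_to_reference(oid);

    // Bind the name Server_Wrapper to the Factory using the Naming Service.
    CORBA::Object_var ns_obj = orb->resolve_initial_references("NameService");
    CosNaming::NamingContext_var nc = CosNaming::NamingContext::_narrow(ns_obj);
    CosNaming::Name name;
    name.length(1);
    name[0].id   = CORBA::string_dup("Server_Wrapper");
    name[0].kind = CORBA::string_dup("");
    nc->rebind(name, factory_ref);

    // Let the ORB run, waiting for remote method invocations.
    poa_mgr->activate();
    orb->run();
    return 0;
}
```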
The create method creates an instance of Worker, activates it with the root POA, and returns a duplicated object reference as the result; nothing fancy.
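A sketch of create, assuming the servant names from the sketches above and that the Factory saved a root-POA reference (root_poa) at construction time:

```cpp
Account_Service::Account_ptr Factory_impl::create()
{
    // One Worker servant per account; activated with the root POA.
    Worker* worker = new Worker;
    PortableServer::ObjectId_var oid = root_poa->activate_object(worker);
    CORBA::Object_var obj = root_poa->id_to_reference(oid);

    // _narrow duplicates the reference, so the returned reference is
    // the caller's to release.
    return Account_Service::Account::_narrow(obj);
}
```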
The three methods inherited from the Account interface share the same basic form, and to a lesser extent so do all server wrapper methods. They are responsible for bridging between CORBA method invocations and the protocols of the system being wrapped.
Figure 4. Server Wrapper Method
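Using Credit as the example, the common form might be sketched as follows. The transformation helper names are hypothetical; only the overall shape (transform the parameters, send, transform the reply back) comes from the text and Figure 4.

```cpp
void Worker::Credit(/* IDL-defined parameters */)
{
    // A default-sized MD is large enough for these data flows
    // (default construction is assumed here).
    Fw_MD md;

    // Transform the method parameters into an ISO8583 request message
    // in the MD (hypothetical helper name).
    build_iso8583_credit_request(md /*, parameters */);

    // Issue the nowaited SERVERCLASS_SEND_ and wait for the reply;
    // errors surface as CORBA exceptions.
    do_pathsend(md);

    // Transform the ISO8583 reply message back into method results
    // (hypothetical helper name).
    interpret_iso8583_credit_reply(md /*, out parameters */);
}
```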
The job of do_pathsend is to issue a nowaited SERVERCLASS_SEND_ and wait for the completion. The Fw_MD parameter contains the data to be sent (in this case an ISO8583 request message) and will be used to hold the associated response data (ISO8583 reply message) if any.
Together with Fw_Message, Fw_MDs (also called message segment descriptors) provide a flexible abstraction of contiguous message data. Fw_MDs can be very large, or they can be strung together in an Fw_Message. For this example, the data flows are small enough to fit into a single default-sized MD.
The Fw_MD parameter’s methods are used to drive several of the SERVERCLASS_SEND_ parameters.
| SERVERCLASS_SEND_ Parameter | MD method |
|---|---|
| message-buffer | get_ip_base( ) |
| request-length | get_iv_data_bytes( ) |
| maximum-reply-length | get_iv_size( ) |
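A hedged sketch of the send side of do_pathsend follows. The Fw_MD accessors come from the table above; the SERVERCLASS_SEND_ parameter order follows the NonStop TS/MP manual, and the PATHMON name, server-class name, timeout, and flags values shown are illustrative assumptions.

```cpp
void Worker::do_pathsend(Fw_MD& md)
{
    short scsend_op_num = 0;
    long  tag = NSDEFw_GFS::generate_unique_tag();  // process-unique tag

    short error = SERVERCLASS_SEND_(
        (char*)"$PM",    3,            // PATHMON process name (assumed)
        (char*)"AP-SVR", 6,            // wrapped server class (assumed)
        (char*)md.get_ip_base(),       // message-buffer (ISO8583 request)
        md.get_iv_data_bytes(),        // request-length
        md.get_iv_size(),              // maximum-reply-length
        (short*)0,                     // actual-reply-length: delivered at completion
        -1,                            // timeout: illustrative value
        1,                             // flags: nowaited send (illustrative value)
        &scsend_op_num,
        tag);

    if (error != 0) {
        // Raise an appropriate CORBA exception for the send failure.
        throw CORBA::TRANSIENT();
    }

    // ... create an Fw_Event and wait for the completion (sketched below).
}
```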
NonStop TS/MP uses a single file number for communications between a client process and the LINKMON in its CPU. The NSDEFw_GCF (Guardian Context Free) static public data member cv_file_number is used to share this information.
The tag used in the NonStop TS/MP call is obtained from the NSDEFw_GFS (Guardian File System) static public method generate_unique_tag. This method provides process-unique file system tags. Note that the same tag could have been reused for every call.
When the NonStop TS/MP operation is issued successfully, an Fw_Event is instantiated to track the completion. The Worker is supplied as the event handler to be called upon completion, the file number is the shared NonStop TS/MP file number, and the tag is the file number’s qualifier. When an event is created, it registers itself with the NSDEvent core.
The Worker then waits for the NonStop TS/MP operation to complete by waiting on its condition variable. This blocks the method thread and allows other threads to run. When the Worker’s condition variable is signaled by its handle_event method, the NonStop TS/MP operation has completed. The event carries the information associated with the completion (file error, bytes received). This information is interpreted: if there are any errors, an appropriate exception is generated. If no errors are detected, the MD is updated based on the event’s completion data.
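The completion side of do_pathsend, factored here into a hypothetical helper await_completion for presentation, might look like the following sketch; the Fw_Event constructor arguments and accessor names are assumptions based on the description above.

```cpp
void Worker::await_completion(Fw_MD& md, long tag)
{
    // Track the completion: this Worker is the handler, the shared TS/MP
    // file number identifies the file, and the tag qualifies it. The event
    // registers itself with the NSDEvent core on construction.
    // (Constructor argument order is assumed.)
    Fw_Event event(this, NSDEFw_GCF::cv_file_number, tag);

    // Block this method thread until handle_event signals the condition
    // variable; other threads continue to run. (Wait semantics assumed.)
    cv_done.wait();

    // Interpret the completion information carried by the event
    // (accessor names are assumed).
    if (event.get_file_error() != 0)
        throw CORBA::TRANSIENT();   // an appropriate exception

    // Update the MD with the reply bytes received (assumed accessor).
    md.set_iv_data_bytes(event.get_bytes_received());
}
```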
The NSDEvent core has a separate event thread that runs at low priority. When no other work is pending in the process, the event thread makes a call to COMMON_COMPLETION_. When the NonStop TS/MP operation completes, the completion’s file number and tag match those of the associated event, and the event core makes an upcall to the event’s handler on the event thread.
This method simply calls signal on the Worker’s condition variable. This allows the method thread to continue once the event thread has run its course.
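A minimal sketch, assuming the cv_done member from the Worker declaration above:

```cpp
// Runs on the event thread when the SERVERCLASS_SEND_ completion arrives.
void Worker::handle_event(Fw_Event& /* event */)
{
    // Wake the method thread blocked in do_pathsend.
    cv_done.signal();
}
```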