ZeroC ICE

From EPICSWIKI

Internet Communication Engine (Ice)

Andy Foster, Observatory Sciences Limited ([email protected]), suggested looking at ZeroC Ice (http://www.zeroc.com/index.html) as a possible new or alternate communication mechanism for EPICS.

This is my log of a one-afternoon investigation of Ice. I think that Ice is a very nice alternative to CORBA or DCOM, but I don't see how it could be used in an embedded environment right now. A discussion forum on the ZeroC web page mentions ongoing development of an embedded version of Ice. It is targeting J2ME and Windows CE, not vxWorks or RTEMS, but that might be a better starting point.

Overview

Ice is middleware: it handles network connections, byte ordering, name lookup, and so on. In contrast to DCOM or CORBA, it promises all of this:

  • fast
  • portable
  • less complex
  • scales well to millions of distributed objects
  • open source with a well-defined API, so there are no interoperability issues between different implementations using proprietary extensions

Ice servers implement objects and expose their interfaces. There can be multiple redundant instances: if the first servant isn't available, clients will try the next one, and so on. Each request is executed on at most one servant, not on all available servants. The same program can act as both server and client.

There is asynchronous method invocation, with a callback into the client on completion. There is asynchronous method dispatch on the server, so the server can accept newly received requests while older requests are still being handled. There is also one-way and batched one-way method invocation: send a bundle of requests and ignore the results. Fast, but unreliable, especially over UDP.

Language Mappings

Ice provides the 'slice' interface definition language, defines the network protocol, and specifies the language mappings into which 'slice' is translated. Mappings are available for C++, Java, C#, Visual Basic .NET, Python and - limited to the client side - PHP.

For example, a slice 'string' and 'sequence' translate into std::string and std::vector under C++.

Network Protocol

According to the manual, Ice implements an RPC-type protocol and can run on TCP or UDP. Additional features:

  • Option to use SSL for encryption.
  • Optional compression to reduce the amount of data on the network.
  • Fixed network byte order (little endian).
  • No padding or alignment, to reduce packet size.
  • Sizes encoded as a single byte for 0..254 elements, or as a 255 marker byte followed by a 4-byte integer for 255 or more elements.

Name Resolution

'IcePack' is the Ice location service that resolves names to network addresses and optionally launches Ice servers when needed. Without it, the client needs to know the IP address and port of the running server.

Events

'IceStorm' is an event distribution service. Servers publish events to it, clients subscribe to events of interest together with quality-of-service (QoS) requests, and IceStorm forwards the events to the subscribers. Load can be shared by running multiple IceStorm instances on different computers.

Misc. Features

Ice provides

  • persistence (by default via Berkeley DB)
  • tools to start servers on remote hosts and to distribute code changes

How to use this for EPICS

Unclear:

  • How to transfer EPICS ProcessVariables?
    One Ice object per PV?
    One object per IOC and the PV data is moved around as IceStorm events?
  • Both the server and the client example code involve a lot of data copying.
    The existing Channel Access API tries to avoid this whenever possible, with the side effect that a client may only see the last value of a rapidly processing string record: the CA server keeps only the most recent value instead of queuing copies of all values.

Pro:

  • Open Source, works on Windows and Unix
  • Claims of good performance
  • Considers encryption, how to use firewalls for protection, and how to tunnel through them
  • IceStorm, a distributed event distribution service, sounds like a neat idea
  • Supports inheritance and run-time mapping: A client written to handle only "dbCommon" objects can connect to "aiRecord" or "calcRecord" objects that are derived from "dbCommon". New servers can implement new record types derived from "dbCommon", and the existing client can handle their "dbCommon" part without recompilation. One can even add new methods to "dbCommon", and existing "dbCommon" clients continue to function without recompilation.

Contra:

  • No C mapping. The only mapping of use for vxWorks or other RTOSs would be C++, where std::string is used in many places; all this might create memory issues unless those allocations can be coerced to use free-list memory.
  • The manual mentions the IceStorm example application of a stock ticker: few pieces of information, many clients. This doesn't match the typical control system layout (100000 PVs, many screens each interested in a few hundred PVs). Will the event distribution turn into a bottleneck in that scenario?