V4 2nd Design Meeting (APS) - Notes
Who is the customer?
As there are no big new machines coming up and there are lots of existing installations, V4 will target the whole community.
Compatibility and conversion issues
V4 will not support 68k processors under Tornado2, only under RTEMS. (There might be memory footprint issues, though.)
Lower limit: 8 MB on 68k or 16 MB on ColdFire - i.e. V4 must support small and embedded systems.
Gateways will be used to convert CA connections between V3 and V4 within one system.
We should provide a set of asara records (asara = as similar as reasonably achievable) that provide V3 behaviour and can be used for automatic conversion of existing V3 databases.
DBD syntax and generation
Building blocks
It seems wrong to have real C++ declarations within the dbd file. The interface definitions should instead reside in a separate header file. (That way it would be easier to, e.g., maintain the interfaces in different languages.)
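A minimal sketch of what could live in such a separate header; all names here are hypothetical, not an agreed interface:

    // analogIoInterface.h -- hypothetical header kept outside the dbd file.
    // The dbd file would refer to this interface only by name, so the same
    // definition could also be generated for other languages.
    #ifndef ANALOG_IO_INTERFACE_H
    #define ANALOG_IO_INTERFACE_H

    class AnalogIoInterface {
    public:
        virtual ~AnalogIoInterface() {}
        // Read/write one analog value; the method names are illustrative only.
        virtual double read() = 0;
        virtual void write(double value) = 0;
    };

    #endif // ANALOG_IO_INTERFACE_H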
Declarations of fields and views
The V3 prompt declaration will be replaced by a "hint" or "help" text that can be applied to any entity defined in the dbd language (not just fields). Tools may use that text to display bubble help or similar things.
The V3 group declaration will be replaced by views that can be used by the configuration tool to group things into hierarchies.
Hierarchies
Jeff envisions a "global" hierarchical namespace, where a request for quad4.ps2.current.analogValue.val would be answered by a "virtual" record that forwards requests to and collects things from different records. Such a virtual record could reside on a different node. (I would call it part of middleware functionality.) It is unclear who would be writing such a record.
Agreement: everything before the first dot is a record name.
V3 compatibility records will define views like
view (HIHI) { property (....) }
so that clients can still use the old syntax of <recordName>.<fieldName> to access fields.
Programmatic views will accept arguments (in parentheses), i.e. the programmatic "field" view will take the field name as an argument, as in
xxx.field(displayLimit)
Subproperties will be addressed using multiple dots, as in
xxx.value.displayLimits.upper
(This might map to a numeric property address - using an array of propertyIds - when transferred over CA.)
Clients should be able to find out whether a certain property implements "analog", "enumerated", or "string" behaviour (to make handling unknown properties easier). Making the IOC answer this kind of question introduces a number of problems (versioning etc.). It might be possible to implement these "types" of properties as a convenience layer library on the client side.
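A sketch of what such a client-side classification could look like; the PropertyInfo shape and the enum values are assumptions for illustration:

    // Hypothetical client-side helper: classify an otherwise unknown property
    // as analog, enumerated, or string so generic clients can handle it.
    // Runs entirely in the client library, so the IOC never has to answer
    // versioned "what kind of property is this?" questions.
    #include <string>
    #include <vector>

    enum class PropertyBehaviour { Analog, Enumerated, String, Unknown };

    struct PropertyInfo {                  // assumed shape of client-side metadata
        bool hasNumericValue;
        std::vector<std::string> choices;  // non-empty for enumerated properties
        bool hasTextValue;
    };

    PropertyBehaviour classify(const PropertyInfo& p) {
        if (!p.choices.empty()) return PropertyBehaviour::Enumerated;
        if (p.hasNumericValue)  return PropertyBehaviour::Analog;
        if (p.hasTextValue)     return PropertyBehaviour::String;
        return PropertyBehaviour::Unknown;
    }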
Record instance syntax
i18n
LOCALE techniques will be supported on systems that provide them, at a level as close to the user as possible. (We're trying not to preclude someone from using LOCALE.) I.e. the dbd file (being a kind of source file) will always use the "." character as its decimal point. It is the task of tools (like vdct) to convert any data in LOCALE format (e.g. using "," as the decimal point character) to the standard format in the dbd file.
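A sketch of the kind of conversion a tool like vdct would perform before writing the dbd file, assuming it parses numbers in the user's LOCALE and always emits "." in the output; the function name is made up:

    // Hypothetical tool-side helper: parse a number the way the user's LOCALE
    // formats it (e.g. "3,14") and re-emit it in canonical dbd form ("3.14").
    #include <iostream>
    #include <locale>
    #include <sstream>
    #include <string>

    std::string toDbdNumber(const std::string& userText, const std::locale& userLocale) {
        std::istringstream in(userText);
        in.imbue(userLocale);               // read using the user's decimal point
        double value = 0.0;
        in >> value;

        std::ostringstream out;
        out.imbue(std::locale::classic());  // always write with '.' for the dbd file
        out << value;
        return out.str();
    }

    int main() {
        // "de_DE.UTF-8" is only an example; the constructor throws if that
        // locale is not installed on the system.
        std::cout << toDbdNumber("3,14", std::locale("de_DE.UTF-8")) << "\n";  // prints 3.14
    }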
Data Access
Type info
The first version presented of a third set of traverse and find methods for finding out the native types of properties does not seem efficient or desirable.
CA/DA Convenience layer / user interface
There will be a higher-level interface library on top of Channel Access and Data Access that hides the inherent complexity and provides the end user with a simplified way to get at the structures, types, and data.
Collecting the requirements and defining the interfaces of that convenience layer is a task with high importance and priority. (Doug and Kay did a first step in that direction.)
Channel groups (with synchronized read and write operations) should be implemented as a separate class that calls the methods of the Channel class internally.
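A sketch of how a channel group could wrap individual channels, as described above; the Channel interface here is a placeholder, not the real CA class, and the synchronization semantics are still to be defined:

    // Hypothetical sketch: a ChannelGroup that implements grouped reads and
    // writes by calling into the per-channel methods, not by subclassing.
    #include <vector>

    class Channel {                        // placeholder for the real Channel class
    public:
        virtual ~Channel() {}
        virtual double read() = 0;
        virtual void write(double value) = 0;
    };

    class ChannelGroup {
    public:
        void add(Channel* ch) { channels_.push_back(ch); }

        // "Synchronized" here simply means one call gathers all values.
        std::vector<double> readAll() {
            std::vector<double> values;
            for (Channel* ch : channels_) values.push_back(ch->read());
            return values;
        }

        void writeAll(const std::vector<double>& values) {
            for (std::size_t i = 0; i < channels_.size() && i < values.size(); ++i)
                channels_[i]->write(values[i]);
        }

    private:
        std::vector<Channel*> channels_;
    };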
The V4 Data Store might implement an interface different from the opaque creation/deletion interface. That interface could have iterators etc. to serve as an alternative to dataAccess for analyzing catalogs.
CA issues
Sending full catalogs vs. changed properties only
It is still unclear whether, for a client's subscription, all properties (i.e. the complete catalog) or only the changed properties should be sent.
Sending only the changed properties would certainly be a lot more efficient on the network. (Implementation idea: a bitmask at the beginning of a message that indicates which properties will be present in this buffer.) That might complicate the event system on the IOC, though: the database would have to provide a similar bitmask to the event manager, so that the CA server is able to decide what to put into the network buffer.
Sending the complete catalog might be less complicated in the CA server and clients, but the user would have no way to find out which of the properties in the presented catalog has changed.
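A minimal sketch of the bitmask idea from the changed-properties variant above; the message layout and the 64-property limit are purely illustrative:

    // Illustrative only: a subscription update carrying a bitmask that says
    // which properties are present in this buffer, followed by their values.
    #include <cstdint>
    #include <vector>

    struct SubscriptionUpdate {
        std::uint64_t presentMask;     // bit i set => property i is in 'values'
        std::vector<double> values;    // only the changed properties, in bit order
    };

    bool hasProperty(const SubscriptionUpdate& u, unsigned propertyId) {
        return propertyId < 64 && ((u.presentMask >> propertyId) & 1u) != 0;
    }

    // The IOC side would have to maintain a matching dirty-bit mask per event
    // so that the CA server knows what to put into the network buffer.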
Network protocol
Jeff will provide a description of his plans for the CA protocol.
Name server
LDAP seems to have some properties that make it a very interesting candidate. The measured resolve rate of ~1500 per second, however, makes people uncomfortable; if we can achieve >10000, it would settle this discussion. More tests with different server hardware and other DB variations (under the LDAP server) should show whether there's a reasonable way to achieve the desired performance.
The way to go is to allow for other name services and to implement an LDAP plug-in as a first attempt. Regular CA name resolution should still be the default and fallback.
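A sketch of a pluggable name-service interface with regular CA resolution kept as the fallback; all names here are hypothetical:

    // Hypothetical plug-in interface: try each registered name service in turn
    // (e.g. an LDAP plug-in), falling back to regular CA name resolution.
    #include <string>
    #include <vector>

    struct ResolveResult { bool found; std::string hostAndPort; };

    class NameService {
    public:
        virtual ~NameService() {}
        virtual ResolveResult resolve(const std::string& channelName) = 0;
    };

    class NameResolver {
    public:
        void addPlugin(NameService* svc) { plugins_.push_back(svc); }

        ResolveResult resolve(const std::string& name, NameService& caFallback) {
            for (NameService* svc : plugins_) {
                ResolveResult r = svc->resolve(name);
                if (r.found) return r;
            }
            return caFallback.resolve(name);  // CA resolution stays the default
        }

    private:
        std::vector<NameService*> plugins_;
    };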
Driver interface
The standard approach for the V4 driver interface will be based on the asyn approach.
Things that still seem to be missing:
- Interface that the record can use to change a device address, i.e. disconnect and reconnect with a different address
- Support for unsolicited messages, i.e. CAN-style communication, where even for a single device multiple read and/or write requests may be pending at the same time. (See the sketch after this list.)
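A sketch of what the two missing pieces could look like on top of an asyn-style driver interface; the class and method names are assumptions, not asyn API:

    // Hypothetical additions to an asyn-style device interface:
    //  - changeAddress(): disconnect and reconnect with a different device address
    //  - an unsolicited-message callback for CAN-style traffic
    #include <cstddef>
    #include <string>

    class UnsolicitedListener {
    public:
        virtual ~UnsolicitedListener() {}
        virtual void onMessage(const char* data, std::size_t length) = 0;
    };

    class DeviceConnection {
    public:
        virtual ~DeviceConnection() {}
        virtual void connect(const std::string& address) = 0;
        virtual void disconnect() = 0;

        // Requested by the record layer: switch the device address at runtime.
        void changeAddress(const std::string& newAddress) {
            disconnect();
            connect(newAddress);
        }

        // Register a callback for messages the device sends on its own.
        virtual void setUnsolicitedListener(UnsolicitedListener* listener) = 0;
    };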
IOC middle layer
aka: The magic in the middle
We need a system of interfaces where multiple (network) servers as well as multiple (data source) services may reside on one IOC. (Example: CA server and TINE server, or EPICS database and SNL service.)
Name resolution
As soon as there is more than one service, name resolution will be done by an interposed component in the middle that calls into the name resolution interfaces of all known services when trying to resolve a name. This middle component may also be responsible for an eventual upload of names to a name server.
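A sketch of that interposed component on the IOC side; the service interface is an assumption made up for illustration:

    // Hypothetical IOC-side middle layer: ask every local service (EPICS
    // database, SNL service, ...) whether it owns a channel name, and route
    // the request to the first service that does.
    #include <string>
    #include <vector>

    class DataSourceService {
    public:
        virtual ~DataSourceService() {}
        virtual bool hasChannel(const std::string& name) = 0;
    };

    class MiddleLayer {
    public:
        void addService(DataSourceService* svc) { services_.push_back(svc); }

        DataSourceService* resolve(const std::string& name) {
            for (DataSourceService* svc : services_)
                if (svc->hasChannel(name)) return svc;  // first owner wins
            return nullptr;                             // unknown name
        }

        // The same layer could also collect all known names and upload them
        // to an external name server; omitted here.

    private:
        std::vector<DataSourceService*> services_;
    };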
Event distribution and event queues
The "incoming" interface for an event queue should be the same as the "incoming" interface for a server. The event queue should be packaged in a library. That way any server can decide to put an event queue in front of its event interface.
The database will have to send up every change to a record's fields. V4 View Generated Code shows how the Data Access code for views is generated from the dbd.
Record processing (as well as dbPutField) will post changes to fields. Decoupling of the changes made by independent entities (e.g. record processing and CA puts) will be done through a transaction-type operation: at the beginning of an operation, a transaction object is created (holding the dirty-bit array), and a reference to that transaction object is returned to the user. This reference is passed in with every change of a field, so that the corresponding dirty bit is set. The flush call will post all fields that have been marked dirty for this transaction.
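A sketch of that transaction idea; the field indexing and the posting mechanism are simplified assumptions:

    // Simplified sketch of the transaction-style posting described above:
    // each independent writer (record processing, a CA put, ...) gets its own
    // transaction holding a dirty-bit array, and flush() posts only the fields
    // that this transaction marked dirty.
    #include <cstddef>
    #include <vector>

    class EventPoster {                    // placeholder for the event system
    public:
        virtual ~EventPoster() {}
        virtual void postField(std::size_t fieldIndex) = 0;
    };

    class Transaction {
    public:
        Transaction(std::size_t fieldCount, EventPoster& poster)
            : dirty_(fieldCount, false), poster_(poster) {}

        // Called with every change of a field; sets the corresponding dirty bit.
        void markDirty(std::size_t fieldIndex) { dirty_.at(fieldIndex) = true; }

        // Posts all fields marked dirty in this transaction, then clears them.
        void flush() {
            for (std::size_t i = 0; i < dirty_.size(); ++i)
                if (dirty_[i]) { poster_.postField(i); dirty_[i] = false; }
        }

    private:
        std::vector<bool> dirty_;
        EventPoster& poster_;
    };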
Big question: Should we put fixed size data chunks on the event queues or allow variable sized data chunks?
Benefit of fixed size:
- Reading out in terms of DataAccess is very fast.
Downside:
- For a given event, the catalog is fixed. There is no way to ask for "the left bacon stretcher" whenever an "alarm change" event occurs.
- Clients that want to use new event types have to be recompiled.
Benefit of variable size:
- Catalogs are completely independent from events. Events are pure filters that define when something will be sent, while the property catalog defines what will be sent.
- The mechanisms are very similar to things in the V4 Data Store, so that some mechanisms (scanning the catalog to determine storage sizes, using fixed-size free lists) might be shared or reused.
Downside:
- Double scanning and making the property catalog aware of "dirty" or "in use" bits will incur a performance hit.
Filtering
Filtering is done in the server.
Filters have the same "incoming" event interface, so they can be put together with event queues in any order the server finds appropriate.
CA filtering is per client; to get more efficient filtering in cases where a bunch of clients (e.g. everything from a certain beamline, or different archivers) use the same filter (e.g. "only blue beam"), the server could put a single shared filter in front of the multiple event queues to avoid evaluating it once for each client.
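A sketch of how the common "incoming" interface lets a shared filter sit in front of several per-client event queues; Event, EventSink, and the filter predicate are placeholders invented for illustration:

    // Everything that can receive events (servers, event queues, filters)
    // implements the same incoming interface, so they compose in any order.
    #include <vector>

    struct Event { int type; double value; };   // illustrative payload

    class EventSink {
    public:
        virtual ~EventSink() {}
        virtual void post(const Event& e) = 0;
    };

    // A filter is itself a sink: it forwards matching events to its downstream
    // sinks, so one shared filter can feed many per-client event queues.
    class BlueBeamFilter : public EventSink {
    public:
        void addDownstream(EventSink* sink) { downstream_.push_back(sink); }

        void post(const Event& e) override {
            if (!matches(e)) return;             // evaluated once, not per client
            for (EventSink* sink : downstream_) sink->post(e);
        }

    private:
        bool matches(const Event& e) const { return e.type == 1; }  // "only blue beam"
        std::vector<EventSink*> downstream_;
    };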