Why are generic file formats required?

Generic branch

The generic branch is a freely configurable universal interface for data import and export.

Generic branch in general:
• Customizable and flexible
• Versatile, applicable or adaptable
• High level of abstraction
• Functions are kept general so that they can be used for different data types and data structures

Generic branch (generic import / export) in relation to VARIO:
• Freely definable data imports and exports (formats: XML, CSV, XLS*) with access to file directories and optional file exchange via FTP and SFTP
• Adjustable interval of automatic processing
• Requirement: at least one webshop branch (for possible order import)

The file format is specified by the customer or by the partner with whom the data is to be exchanged (interface description). The great strength of this concept is that the data can be provided directly in the form in which it can be imported into the other program. Most other programs specify a fixed file format, and an additional program or a service provider is required to take care of the data conversion. This effort is not required with VARIO (exceptions: Edifact and BTE, see below).

Other special features are the very extensive time control options, based on simple rules in natural language, and the option of defining very detailed error handling in order to react to incomplete or incorrect data.

The generic branch has access to all database tables, so all data from VARIO can be imported or exported.

The most common use cases are:
• Item data
• Stock levels
• Price lists
• Documents (orders, delivery notes, invoices)
• Address details
• Export of images for online shops
• Logistics systems / shipping service providers

File formats:
• Excel
• Edifact (in cooperation with EDI4ALL)
• BTE (in cooperation with FEE)
• Images (only as export)

Transmission routes:
• http
• Network drives
• Local directories

Customer information

Basically, all the tools that we use when creating the configurations are also available to the customer, so he can create and edit the mappings himself. However, due to the size and complexity of the interface, we advise against this.

Since we receive requests involving the most varied formats, we cannot make general statements about prices. We need sample data or a specification of the file format from the customer in order to be able to create an individual offer. The customer also needs a maintenance contract, since using the generic branch always requires the latest VARIO version.

Depending on the complexity, customer support takes the form of consulting or development. At this level of complexity, support via the hotline is simply not possible. It is therefore an independent project: the customer is assigned a project manager who takes care of the necessary configurations. Once the 'Generic Branch' module has been purchased, any number of these universal interfaces can be used.

Implementation process

Due to the diverse formats and requirements, the connections are implemented as individual projects, which roughly follow this sequence:

1. Requirements elicitation
First, a project supervisor works out the requirements of the project with the customer.
• With whom should data be exchanged? How many remote stations are there?
• Which data should be transferred? Receipts, article data, addresses?
• Which data format should be used?
• How should the data be transferred?

2. Preparation of offers
After viewing the sample data or specifying the file format, an individual offer and a schedule are created.

3. Technical implementation
After the order has been placed, the technical implementation of the interface follows. For this purpose, file descriptions are created which define how the data must be processed by the generic branch.

4. Test phase
As soon as the technical implementation is ready, a test run follows, which serves to uncover any problems and to familiarize the customer's employees with the new interface.

5. Live operation
After the successful test phase, live operation follows in normal everyday use.

6. Maintenance / care
The connection is not set in stone. Should new requirements arise, or should changes become necessary due to new program versions at the business partner, adjustments are possible at any time.


For the creation of a time and cost plan, some information is necessary, which can be worked out with the following questionnaire.

Which remote stations are there?

Every company and every program / web service with which data is to be exchanged counts as a remote station.

A list with all remote stations and their roles is required:
• Company A (major customer)
• Company B (supplier)
• Program XY (address book of the telephone system)

Which data should be exchanged?

A list of the message types and the direction of transmission is required for each remote station.

Company A (major customer)
• Import of orders
• Export of delivery notes
• Export of invoices
• Export of inventory reports

Company B (supplier)
• Export of orders
• Import of article descriptions
• Import of invoices

Program XY (address book of the telephone system)
• Export of the contact persons with their telephone numbers

What does the file format look like?

In order to be able to estimate the complexity of the file format and thus the work involved in the implementation, an example file or a description of the file format is required for each message (i.e. each item in the list above).

How is the data transferred?

In addition to the question of which protocol is used for the transmission, the question of who provides the server may also need to be clarified.

• Company A (major customer): The FTP server of the VARIO user is used
• Company B (supplier): The supplier's SFTP server is used
• Program XY (address book of the telephone system): The data is stored on a network drive

With drop shipments:
Are reports required to print delivery notes / invoices in the partner company's design?

If necessary, organizational points still need to be clarified:
• Provision of the server for the RS
• Integration of the interface in the company processes
• Training of employees


Creation of a new branch

Open the menu item 'Manage branches' in your VARIO under Master data - Branches / sales channels.

Create a new branch, select the branch type 'Webshop replication', assign a short name and select the associated address of your branch:

Switch to the 'Webshop' tab and select the webshop type 'Generic'. You also need to select a stock number and the storage area for the import:

After saving, the corresponding configurations must be created for the branch. These are different for each application. Clicking on 'Create configuration' opens the mapping editor:

There are different types of configuration, in which you can store mappings, constants or SQLs:

In the 'Execution' tab, for example, the time control can be set up:

Basics of the mapping editor

The central tool for creating and editing the mappings is the mapping editor.

The mapping editor can be roughly divided into three areas:
• The top bar
• The left part with the tabs for defining the mapping
• The right part for help and output

First of all, the top bar is interesting, which contains some elementary functions and settings.

The name and the type of mapping are defined using the first three red-colored fields. The type of mapping and the file format are essential for the mapping editor to display the correct tabs.

The name has no relevance for processing, but it should still be chosen sensibly because it is used to display the mapping in the mapping management.

The small red square in the middle of the bar is used to activate or deactivate the mapping. Deactivated mappings are ignored by the replication server.

The mapping (i.e. the description of the file format) can be exported to a file using the two buttons at the top right, so that it can be transferred to the customer system and imported again with the second button. This function is also useful for creating a backup before making changes to the mapping.

The lower part of the mapping editor is divided into a left and a right part.

By default, the left part is used to define the mapping and the right part for the output, but the tabs can be dragged from one side to the other using drag and drop, for example to look at the tab with the mapping and the tab with the sample file at the same time.

The Help tab in the screenshot is particularly useful and is described in more detail in a separate tutorial.
On the left-hand side, the Execution and Transfer tabs are relevant for all mappings.

First to the Transfer tab:

Here it is defined which files are imported or exported. When importing, the wildcard * can be used in the file name field, e.g. *.xml. When exporting, placeholders can be used for the date, e.g. export-YYYY-MM-DD-hh-mm-ss.xml

The directory has two functions: If data transfer via (S) FTP is selected in the lower part, then it serves as a temporary working directory. If the files are only stored locally, then it is used as the input / output directory.

Archive: As the name suggests, the archive directory is used to archive imported and exported files, as these are normally removed from the working directory after processing.

Error: Files where something went wrong are stored in the error directory.

If you select a protocol for the transmission in the transmission combo box, the appropriate input fields for configuring the transmission are automatically displayed in the lower part.

After the output or transmission has been configured, we come to the Execution tab.

In the large red marked field, rules can be used to define when the mapping should be carried out. Since the options for time control are quite extensive, these are explained in more detail in a separate tutorial.

In the upper part of the tab there are two marked fields. The group is used to group several mappings. The background is that some customers use 60 or 70 mappings, which would be quite a load for a single replication server. Therefore, the mappings can be divided into several groups, and several replication servers can each process one group.

The second field, Order, determines the order in which the mappings are processed. This allows dependencies to be taken into account, e.g. that the articles are imported first and then the orders.

The two buttons Reset time and Force execution are used to control the replication server.

In the case of a mapping that should only be carried out once a day, the "Reset time" button removes the marking that the mapping has already been carried out and the mapping will be processed again the next time the replication server is run.

The Force execution button deactivates the time control once, so that the mapping is always executed the next time the replication server is run, regardless of the date, day of the week and time.

These are the basic settings of the mapping editor that are required for every mapping. The other tabs are explained in more detail in the context of further tutorials.

Integrated help function

The help function in the mapping editor lets you look up the key combination that you currently need or read the details of the command you are currently using.

When the mapping editor is started, the Help tab is automatically displayed, in which the available keyboard shortcuts can be seen. This basic help can be called up at any time via F1.

In the tabs Mapping, Constants and SQLs, depending on the mapping type and file format, macros and a context menu with suitable code elements are available.

For the macros, a short name is entered in the editor, which is replaced by a larger block of code by pressing the Shift and Space keys.

For example, the basic structure for the external SQLs, which can be seen in the screenshot, is generated by entering sqls and pressing the key combination.
The available macros are displayed when you click in the corresponding editor window and press F2.

In addition to the macros, there is also the context menu, which is particularly required for imports in order to insert the string functions for editing the imported data into the mapping.

If you press the CTRL key when clicking a point in the context menu, an explanation of the command you clicked is displayed in the Help tab.

Create export mappings

The universal export of the generic branch allows any data from the VARIO 8 database to be exported to files of the types XML, CSV and Excel.

When creating the mapping, the desired file format is initially irrelevant for the procedure.

As a basis for the export, the SQLs are created with which the required data are compiled. From the result of the SQL queries, a raw version of the mapping is then generated, which is adapted step by step to the customer's specification.

The first step is to create a new mapping and select the Universal Export type and the desired file format in the header.

Since, as already mentioned, the SQLs are defined first, the SQLs tab must be opened. To make work easier, macros are available in the Mapping, Constants and SQLs tabs, with which code blocks can be generated.

The available macros are displayed when you click on the corresponding editor field and press F2.

On the screenshot we can see that the macro sqls creates a basic structure for the export, so we enter sqls in the editor and then press SHIFT and SPACE. The key combination turns sqls into the code block that can be seen on the next screenshot:

In the same way, further queries for 1:n relations can be created with query, etc.

The SQLs for a document export could look like this:

<SQLS>
  <QUERY_BEK>
    <SQL><![CDATA[
      select bek.*
      from bek
      where bek.adressnr = :ADRESSNR
        and bek.belegart in ('03')
        and bek.belegstatus >= '3'
        and not exists (select bek_mov.id
                        from bek_mov
                        where bek_mov.bek_id = bek.id
                          and bek_mov.repliziert_jn = 'J')
      order by bek.belegschluessel
    ]]></SQL>
    <QUERY_BEP>
      <SQL><![CDATA[
        select bep.*
        from bep
        where bep.bek_id = :QUERY_BEK.ID
      ]]></SQL>
    </QUERY_BEP>
  </QUERY_BEK>
  <SUCCESSSQL_BEK>
    <SQL><![CDATA[
      insert into BEK_MOV (bek_id, repliziert_jn) values (:QUERY_BEK.ID, 'J')
    ]]></SQL>
  </SUCCESSSQL_BEK>
</SQLS>


Note: This tutorial is about explaining the basic procedure for creating an export mapping using an example that is as simple and clear as possible. For this reason, the mapping is trimmed to a few fields and it is not a reference mapping that should be used in a customer project.

What is particularly noticeable about the example is that it is a mix of XML and SQL. The XML framework is used to structure the SQLs.

The node <SQLS> (and the associated </SQLS>) is the root node, which marks the beginning and the end of the XML file. In between we can create any number of QUERY_, SUCCESSSQL_ and ERRORSQL_ nodes.

The QUERY_ nodes define the SQL queries that provide us with the data for the export, either as a select or as an execute block with return values.

Within a QUERY_ node there can be multiple SQL nodes containing the SQL code to be executed, but also further QUERY_ nodes as well as SUCCESSSQL_ and ERRORSQL_ nodes that are executed after a successful or failed export.

Since SQL code contains special characters that are also used by XML, the SQL code must stand between the markers <![CDATA[ and ]]>. This way, the XML parser recognizes that it is text with special characters and not invalid XML code.

In the example, QUERY_BEK determines the document headers. We will discuss the filters used in this query below.

For each document header, we also need the line items, which QUERY_BEP delivers. Because the QUERY_BEP node is defined within the QUERY_BEK node, the mapping editor knows that it is a subquery or 1:n relation and later creates the mapping accordingly.

Interesting here is the line

where bep.bek_id = :QUERY_BEK.ID

Via :QUERY_NAME.FIELDNAME we can use the values of the higher-level query as parameters in the SQL and thus create the link for the 1:n relation.

A note at this point: if an execute block is used instead of a select, the transfer of the values only works for the parameters of the execute block:


Within the execute block you can then work with :BEK_ID, but :QUERY_BEK.ID is not available there.
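As a sketch, QUERY_BEP from the example could be written as an execute block like this. The syntax assumes a Firebird database (as used by VARIO); the columns artikelnr and menge are hypothetical:

```sql
/* The value of :QUERY_BEK.ID is passed in via the block's own
   input parameter bek_id. */
execute block (bek_id integer = :QUERY_BEK.ID)
returns (artikelnr varchar(30), menge numeric(15,2))
as
begin
  for select bep.artikelnr, bep.menge
        from bep
       where bep.bek_id = :bek_id  /* the block parameter works here */
        into :artikelnr, :menge
  do
    suspend;
  /* :QUERY_BEK.ID itself would NOT be available inside the block */
end
```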

In addition to the two QUERY_ nodes, the example also contains the node SUCCESSSQL_BEK.

The SUCCESSSQL_ nodes are bound to a query and are executed after each successfully exported data set. Similarly, there are also ERRORSQL_ nodes for the event of an error.

The node SUCCESSSQL_BEK is used to mark the respective receipt as exported by creating an entry in the table BEK_MOV. The link to the current document header is established via :QUERY_BEK.ID.

The SuccessSQL also explains the exists condition in QUERY_BEK, since only those documents are to be exported that have not yet been marked in BEK_MOV.

Then there is the parameter :ADRESSNR in QUERY_BEK, which takes us to the next tab.

The SQLs are often quite complex and in many cases very similar. The same fields are almost always required for receipts and the similarities are even greater with price lists.

In order to be able to reuse the SQLs more easily without having to change the address or customer number in numerous places, it is possible to define them as parameters in the Constants tab.

In a later tutorial, we will go into how to maintain the SQLs centrally in a master mapping and only adjust the parameters in the derivations.

The definition of the parameters in the Constants tab is quite simple. The macro params and SHIFT + SPACE create the basic structure, and then as many param nodes as necessary are defined. To the left of the equals sign is the name of the parameter without the colon, in this case ADRESSNR, and to the right the value, e.g. 100011.
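Based on this description, the result in the Constants tab might look roughly like this. The exact node names are an assumption derived from the params macro; 100011 is the sample value from the text:

```xml
<PARAMS>
  <PARAM>ADRESSNR=100011</PARAM>
</PARAMS>
```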

In addition to the self-defined parameters, the export also provides parameters for the from date and the till date, which define a time period for the data and are explained in more detail in the tutorial on time control.

After the parameters and the SQLs have been defined, the actual mapping can now be generated using the button at the bottom left.

From this point on, the differences between the file formats come into play.

XML export

In the case of XML export, the mapping largely corresponds to the file that is to be generated. Instead of the values, placeholders in the form QUERY_NAME.FIELD are used, and the nodes that are to be created multiple times are assigned a query as an attribute.

The screenshot shows the differences between the mapping and the generated file:

In the mapping on the left, the node BEK contains the attribute QUERY_BEK; therefore a BEK node is created for each data record that the query delivers. The attribute SUCCESSSQL_BEK assigns the query the SQL that is executed for each successful record. The value ExternalSQL indicates that the SQL was defined in the separate SQLs tab and not within the mapping, as was previously the case with older mappings.

The exported document has two positions, hence the BEP node is generated twice.

In the example mapping there are two peculiarities:

On the one hand, there is the constant node, which shows that texts that do not correspond to any query field or parameter name are transferred to the generated file as constants.

A rather rare case is the use of XML attributes, as can be seen with the document number. However, this does not bother us, because here, just like with normal XML nodes, simply the field name or the constant is entered.
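Putting these pieces together, a trimmed mapping and the file generated from it might look roughly like this. All element and field names here are hypothetical; only the QUERY_BEK / SUCCESSSQL_BEK attributes, the placeholder form and the behaviour of constants follow the description above:

```xml
<!-- Mapping -->
<EXPORT>
  <BEK QUERY_BEK="" SUCCESSSQL_BEK="ExternalSQL" NUMMER="QUERY_BEK.BELEGSCHLUESSEL">
    <KUNDE>QUERY_BEK.ADRESSNR</KUNDE>
    <WAEHRUNG>EUR</WAEHRUNG> <!-- constant: matches no query field or parameter -->
    <BEP QUERY_BEP="">
      <ARTIKEL>QUERY_BEP.ARTIKELNR</ARTIKEL>
    </BEP>
  </BEK>
</EXPORT>

<!-- Generated file: one document with two positions -->
<EXPORT>
  <BEK NUMMER="2024-0001">
    <KUNDE>100011</KUNDE>
    <WAEHRUNG>EUR</WAEHRUNG>
    <BEP><ARTIKEL>4711</ARTIKEL></BEP>
    <BEP><ARTIKEL>4712</ARTIKEL></BEP>
  </BEK>
</EXPORT>
```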

In conclusion, it can be said about the XML export that there are two possible courses of action:

Either you take a sample file from the customer and replace the sample data with the placeholders, or you create a basic mapping and adapt it step by step to the customer's specifications. How to proceed is a matter of taste.

CSV / Excel export

Since the mappings for the formats Excel and CSV are identical except for one detail, a distinction makes no sense at this point. For reasons of readability, only CSV will be mentioned in the following, but everything applies 1:1 to Excel as well.

Basically, the mapping for CSV files is described in the form of an XML file.

There are 3 main elements:

In the root node CSV, some basic settings are made for the complete file. The attribute Delimiter defines the delimiter that is used in the CSV file. Since Excel files do not have a separator, the attribute is useless there and can be omitted. This is the detail that distinguishes the two formats.

Headline defines whether a line with headings should be output.

Quoted specifies whether the values are put in quotation marks. Possible values are NO, SINGLE and DOUBLE for no, single or double quotation marks.

The Recordtype nodes are the second important element, via which different record types can be defined: e.g. a line with the document header, followed by N lines with the corresponding items, then the header of the next document, and so on.

A 1:N relation between two record types, such as the document header and its positions, can be defined by nesting the Recordtype nodes.

The third element is the Column node, via which the columns are defined.

Decisive here is the node POSITION, which specifies the column number. Numbers (1, 2, 3, ...) as well as Excel column numbering (A, B, C, ..., Z, AA, AB, ...) can be used. Since CSV sample files are often opened with Excel and often contain many empty columns, the Excel numbering has proven itself.

It is important that only the position is taken as the criterion for the order of the columns. The order of the Column nodes in the mapping can be freely chosen, and omitted column numbers are automatically filled with empty columns. If there are several empty columns at the end, only the last column needs to be defined so that the export recognizes how many empty columns it has to fill.

The node Databasefield specifies what is to be written to the file. This can either be a reference to the queries or a constant.

The node Name is optional and serves to define headings for the columns, if Headline was set to YES in the root. The heading only makes sense in cases where only one record type has been defined.
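Putting the three elements together, a minimal CSV mapping might be sketched like this. The nesting and exact spelling of the nodes are assumptions based on the description above, and the query fields are hypothetical:

```xml
<CSV Delimiter=";" Headline="YES" Quoted="NO">
  <RECORDTYPE QUERY_BEK="">
    <COLUMN>
      <POSITION>A</POSITION>
      <DATABASEFIELD>QUERY_BEK.BELEGSCHLUESSEL</DATABASEFIELD>
      <NAME>Document number</NAME>
    </COLUMN>
    <COLUMN>
      <!-- column B is omitted and filled with an empty column -->
      <POSITION>C</POSITION>
      <DATABASEFIELD>QUERY_BEK.ADRESSNR</DATABASEFIELD>
      <NAME>Customer</NAME>
    </COLUMN>
  </RECORDTYPE>
</CSV>
```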


After the mapping has been created, the test phase follows. Before the tests can be started, the directories must be set in the Execution tab, because some tests create temporary files in the background.

The test functions can be found in the right-hand part of the editor in the Output and Log tabs. The Output tab is used to display the generated files. In the case of XML files, the file is displayed directly; in the case of CSV and Excel files, there is the option of switching back and forth between the text view and a grid view.

In the Log tab, the messages of the export are output, such as notes on faulty SQLs.

At the bottom right of both tabs, the two buttons Test and Run can be found.

The Test button outputs the content of the file to be generated in the Output tab. In test mode, the Success and ErrorSQLs are not executed and no files are created.

The second button, Run, carries out the complete mapping exactly as the replication server would, including the time check, all Success and ErrorSQLs, generating the file and, if necessary, transferring it to the (S)FTP server. The function can be used for the manual execution of mappings, but its main purpose when creating mappings is to save the constant switching between the mapping editor and the replication server.

The selection box for the Test date is used to test the time control and to check whether the period of the exported data is correct. More information can be found in the tutorial on time control.

Create import mappings

The generic branch offers various imports, such as the universal import and the BEK, BSK and ART imports. Technically, they are all based on the universal import and only differ in that some constants are set automatically to make creating the mapping easier. The tutorials therefore only differentiate between the file formats, which require different mappings.

Since both CSV and Excel files save data in tabular form, both formats are treated equally by the generic branch. The only difference is the definition of the separator, which is superfluous with Excel files. Therefore, the information from this tutorial can be used 1:1 for Excel files, even if only CSV is mentioned in the further course for the sake of simplicity.

Import of CSV / Excel files

There are basically two ways of storing data in a 1:n relation in tables:

In the first variant, different record types are defined, which are marked accordingly at the beginning of each line.

In the example, a document header is followed by the associated positions, and then it continues with the next receipt.

The second variant corresponds to an SQL join of the tables:

In the example, the first columns with the data of the document header are repeated at each position.

The generic branch can process both variants, but a few particularities must be observed in the mapping so that the data can be processed correctly.
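For a simple document export, the two variants might look like this. The record-type markers, the separator and the field contents are purely illustrative:

```
Variant 1 - record types:
HEAD;2024-0001;100011
POS;1;4711;5
POS;2;4712;2
HEAD;2024-0002;100012
POS;1;4711;1

Variant 2 - join:
2024-0001;100011;1;4711;5
2024-0001;100011;2;4712;2
2024-0002;100012;1;4711;1
```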

Variant 1: CSV with several data records

To do this, we go to the Mapping tab and click on Create mapping.