
Creating a 1C cluster. General scheme of client-server operation

About two years ago we published material about the 1C Enterprise server on the Linux platform, and interest in the topic is still high. At the same time, a lot has changed: the 1C platform does not stand still, and an implementation most often goes beyond simply repeating instructions. This is not surprising, as the 1C Enterprise server is a complex product, so we decided to start this series of articles aimed at a deeper study of the subject.

Before picking up the mouse and running to the server room, you should clearly grasp the required minimum of knowledge, namely, have an idea of the structure of the 1C Enterprise server and the purpose of its individual components. Most of the problems during implementation stem from the 1C Enterprise server being perceived as a kind of monolith in which all the components are interconnected in some cunning way known only to the developer. This is not so, and today we will figure out what our server consists of and how it all works together.

I would like to emphasize once again the extreme importance of what is discussed below. Without this knowledge it will be difficult to achieve stable operation, let alone diagnose bottlenecks and improve performance. Otherwise the result may be the classic picture: the hardware seems powerful, everything was done according to the instructions, yet it is slow. Unfortunately, most instructions for beginners (including ours) only cover how to do something, without focusing on what exactly is being done and why. So let's start filling that gap.

The client-server version of 1C Enterprise is a three-level (so-called "three-tier") structure that includes a client, a 1C Enterprise server and a DBMS server. These are completely independent components that can be combined in any valid configuration to achieve the best result. Consider the following diagram:

Let's start with the clients. The current version of the platform (8.2) provides three types of clients; let's look at them in more detail.

Thick client

This is the classic 1C client application; before the release of platform 8.2 it was the only available type of client. The thick client works as follows: the client application requests data from the 1C server, which in turn requests it from the database and passes it back to the client, where it is processed. As you can see, this scheme is not optimal: the 1C server is essentially just a layer between the client and the database, and all calculations take place on the client. This imposes increased requirements on client PCs, because the server's computing power is not used. It is worth understanding clearly that in thick client mode you will not get a performance gain from switching to the client-server version; if anything, the opposite.

Thin client

It can be called the main type of client application for the 8.2 platform; in theory, at least, because in practice not everything is so smooth, and we will return to this. It works in a radically different way: the client requests data from the 1C server, which receives it from the database, processes it and returns the result of the calculation to the client. The main computing load falls on the server, so there are no special requirements for client PCs or for the channel between client and server.

The thin client can also work either over TCP/IP on a local network or via HTTP over the Internet. The latter requires one more intermediary, a web server, which relays client requests to the 1C server; no data processing is performed on the web server, it is used purely as transport. The advantages of the thin client are clear: given a powerful server, it significantly speeds up work with the program, and network traffic is also reduced considerably, which matters a great deal for office networks.

Web client

Its existence follows logically from the properties of the thin client: indeed, if all requests are processed by the server and HTTP is used as transport, then why not work from a browser? The web client operates in the same way as the thin client; however, today not all functions supported by the thin client are implemented and work correctly in the web client. In part this can be corrected in the configuration; in part the browser's mechanism for displaying information imposes restrictions. Nevertheless, 1C does have a web client, it works, and nothing (again, in theory) stops you from working in the program while lying on the beach with a tablet.

Now about the fly in the ointment. To work properly in thin and web client modes, the configuration must run in managed application mode and support all of its functions in that mode. Managed application mode is the primary mode for the 8.2 platform and differs quite radically from what came before, including in appearance. Visually, a managed application can be recognized by its new interface, whose distinctive features are tabs and hyperlinks:

At the very least it is unusual, especially in comparison with the classic interface. But don't rush to rejoice when you see the new interface: besides the appearance, the configuration must support execution of all of its functionality on the server, and it may well turn out that not all features are available in thin and web client modes.

Today only some of the typical configurations work in managed application mode: Small Firm Management, Trade Management 11, Retail 2, and Salary and HR Management. These solutions can take full advantage of the new platform. Enterprise Accounting 2.0 does not use managed application mode and will not work in the thin and web clients; the same applies to many third-party solutions, such as "Kamin", etc.

Conclusions

If possible, you should use the thin client, as it shifts all calculations to the server side and allows comfortable work even over slow channels, including the Internet. Keep in mind that working in Configurator mode is only possible through the thick client, which will also have to be used for configurations that have not yet been ported to managed application mode.

The web client should be used when a thin client is unavailable, for example from someone else's PC on a business trip, but you should be prepared for some functions to be missing or to work incorrectly.

1C server cluster

Having dealt with the clients, let's move on to the servers. The system provides three types of servers: the 1C server, the DBMS server and the web server. It is important to understand that these servers are completely independent of each other; this gives the system flexibility and allows rational use of computing resources.

Nor does the system impose any platform requirements. You can mix Windows and Linux servers, Apache and IIS can be used as the web server, and the supported DBMSs are PostgreSQL, MS SQL Server, IBM DB2 and Oracle. So nothing stops you from building a scheme in which a 1C server running on Linux works together with a database server running Windows Server and IIS, or vice versa. In addition, you can use several DBMS servers (as well as web servers), placing different databases on different servers.

This approach lets you flexibly combine, expand and change the existing configuration according to current needs, while remaining as transparent as possible to the end user. For example, you can move a resource-intensive infobase to a separate DBMS server by changing only the database connection parameters in the server settings, without touching the client settings.

And finally, the most interesting part: the cluster of 1C Enterprise servers. Yes, that's right: not a single server, but a cluster of servers. This is usually where confusion begins, especially if there is only one physical server. Everything falls into place, however, once you realize that the server cluster is primarily a logical concept; this approach makes it easy to scale the scheme, increasing its performance or fault tolerance.

Any cluster consists of a central 1C Enterprise server and working servers. In the simplest configuration these are one and the same physical server. If necessary, however, we can add additional working servers, whose load will be balanced by the central server. This allows you to quickly and transparently increase the computing power of the system and improve fault tolerance. The cluster likewise imposes no requirement of platform homogeneity: it can include servers running both Windows and Linux.

What conclusions can be drawn from the above? First, the 1C Enterprise client-server system is very flexible and allows the available computing resources to be used to best effect. Which configuration to choose depends on the specific tasks and the funds allocated to solve them.

For example, if the load is light and you use a thick client with a configuration that does not support managed application mode, it makes sense to combine the 1C server cluster and the DBMS server on one physical machine, since it is very wasteful to allocate a separate machine for a mere layer between the client and the database.

Conversely, when using a managed application in thin client mode, it is better to separate the DBMS server and the server cluster onto different machines, each optimized for its own task.


This article considers several options for structuring 1C in high-load systems (from 200 active users) built on a client-server architecture: their advantages and disadvantages, installation costs, and comparative performance tests of each option.

We will not describe, evaluate or compare the generally accepted and long-known classic schemes for building a 1C server structure, such as a separate 1C server plus a separate DBMS server, or a Microsoft SQL cluster with a 1C cluster. There are a great many such reviews, including those conducted by the software manufacturers themselves. Instead, we offer an overview of the 1C structural schemes we have encountered over the past few years in our IT projects for medium and large businesses.

Requirements for highly loaded 1C systems

Highly loaded 1C systems working with large amounts of data 24/7/365 are exposed to risk factors not usually seen in standard situations. Eliminating and preventing them requires special 1C architecture schemes and new technologies.

DBMS disaster resistance. When designing a 1C architecture, the emphasis is placed on computing power and high availability of services, expressed in their clustering. By default, 1C:Enterprise servers can operate in a redundant cluster, and a DBMS cluster usually relies on an industrial data storage system and clustering technology (for example, Microsoft SQL Cluster). However, the situation becomes dire when problems occur with the storage system itself (in our experience of recent years these are often software problems). The IT engineer then suddenly faces two questions: where to get up-to-date data, and where to deploy it in the shortest possible time, since no storage system with a fast disk array of the required volume is available.

Database security requirements. Working on projects for medium and large businesses, we regularly encounter requirements for the protection of personal data (in particular, compliance with the provisions of Federal Law 152). One of the conditions for meeting these requirements is ensuring proper security of personal data, which requires encryption of the 1C database.

When designing a scheme for a highly loaded 1C system, attention is usually paid first of all to the I/O parameters of the disk system on which the databases reside. But beyond that, there is also heavy CPU utilization and RAM consumption by the 1C server itself. It is often precisely this type of resource that runs short: the options for upgrading the current 1C server's hardware are exhausted, and new 1C servers working with a single DBMS server have to be added.

Schemes for organizing clusters of 1C servers

Scheme with a cluster of 1C servers connected to a cluster with synchronous SQL AlwaysOn replication over IP. This scheme is one of the high-quality solutions to the problem of disaster resistance of the 1C database (see Figure 1). SQL AlwaysOn database clustering technology is based on online synchronization of SQL tables between the main and backup servers without end-user intervention. Using an SQL Listener, it is possible to switch to the backup SQL server if the main one fails, which, thanks to the use of two independent SQL servers, makes this a full-fledged disaster-resistant SQL cluster. AlwaysOn technology is only available in the Enterprise edition of Microsoft SQL Server.

Figure 1 - diagram of a cluster of 1C servers + SQL AlwaysOn
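To make this concrete, here is a minimal monitoring sketch (not part of the original scheme; the listener name, driver and the pyodbc package are our assumptions) showing how an administrator might confirm from Python that the AlwaysOn replicas are healthy and synchronized:

```python
# Hedged sketch: querying AlwaysOn replica state through standard SQL Server DMVs.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-listener.example.local;"   # hypothetical AlwaysOn listener name
    "DATABASE=master;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("""
    SELECT ar.replica_server_name,
           rs.role_desc,                    -- PRIMARY / SECONDARY
           rs.synchronization_health_desc   -- HEALTHY means the replica is in sync
    FROM sys.dm_hadr_availability_replica_states AS rs
    JOIN sys.availability_replicas AS ar ON rs.replica_id = ar.replica_id
""")
for server, role, health in cursor.fetchall():
    print(f"{server}: {role}, {health}")
```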


The second scheme is identical to the first, with the addition of encryption of the SQL databases on the main and backup servers. We have already mentioned that recent IT projects show companies paying much more attention to data security, for various reasons: the requirements of Federal Law 152, raider seizures of servers, data leaks from the cloud, and the like. So we consider this version of the 1C scheme quite relevant (see Figure 2).


Figure 2 - diagram of a cluster of 1C + SQL AlwaysOn servers with encryption
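As a small companion check for this scheme (the server name and driver are again assumptions), one can list from Python which databases on a node actually have encryption enabled:

```python
# Hedged sketch: the is_encrypted flag in sys.databases reports TDE status per database.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-node-1.example.local;"  # hypothetical replica host
    "DATABASE=master;Trusted_Connection=yes;"
)
for name, is_encrypted in conn.execute("SELECT name, is_encrypted FROM sys.databases"):
    print(f"{name}: {'encrypted' if is_encrypted else 'NOT encrypted'}")
```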


A cluster of "active-active" 1C servers connected to a single DBMS server over IP. In contrast to the needs for fault tolerance and security, some organizations primarily require increased performance, so to speak, "all the computing power". Maximum priority is therefore given to increasing the number of computing servers in the 1C cluster, across which the modern 1C platform can distribute various types of computations and background jobs (see Figure 3). Of course, the configuration of the main resources of the SQL server should also be up to standard, but the database server itself exists in the singular (apparently, the calculation rests on timely backups of the databases).


Figure 3 - diagram of a 1C server cluster with one DBMS server


1C server and DBMS on one hardware server with SharedMemory. Since our practical tests focus on comparing the performance of different schemes, a reference is needed against which to compare the options (see Figure 4). As the reference, in our opinion, one should take the layout with the 1C server and the DBMS on a single hardware server without virtualization, interacting via SharedMemory.


Figure 4 - diagram of 1C server and DBMS on one hardware server with SharedMemory


Below is a general comparative table showing the overall results for the key criteria used to assess the organization of the 1C system structure (see Table 1).


Criterion for evaluating 1C architectures | 1C cluster + SQL AlwaysOn | 1C cluster + SQL AlwaysOn with encryption | 1C cluster with one DBMS server | Classic 1C + DBMS via SharedMemory
Ease of installation and maintenance      | Satisfactory              | Satisfactory                              | Good                            | Excellent
Fault tolerance                           | Excellent                 | Excellent                                 | Satisfactory                    | Not applicable
Security                                  | Satisfactory              | Excellent                                 | Satisfactory                    | Satisfactory
Budget                                    | Satisfactory              | Satisfactory                              | Good                            | Excellent

Table 1 - comparison of options for building 1C systems


As you can see, one important criterion remains whose value has yet to be determined: performance. To determine it, we will run a series of practical tests on a dedicated test bench.

Description of testing methodology

The testing methodology rests on two key tools for generating synthetic load and simulating user work in 1C: the Gilev test (TPC-1C) and the "Test Center" from the 1C:KIP instrumentation toolkit.

Gilev's test. This test belongs to the class of universal integral cross-platform tests. It can be used for both the file and the client-server versions of 1C:Enterprise. The test evaluates the amount of work done per unit of time in a single thread and is suitable for assessing the speed of single-threaded loads, including interface rendering speed, the resource overhead of a virtual environment, document reposting, month-end closing, payroll calculation, and so on. Its universality allows a generalized performance assessment without being tied to a specific typical configuration. The result is a summary score for the measured 1C system, expressed in conventional units.

The specialized "Test Center" from the 1C:KIP toolkit. Test Center is a tool for automating multi-user load tests of information systems on the 1C:Enterprise 8 platform. With it you can simulate the operation of an enterprise without real users, which allows you to evaluate the applicability, performance and scalability of an information system under realistic conditions. Using the 1C:KIP tools, the matrix "List of Objects of the ERP 2.2 database layout" is generated from the processes and test cases for the performance testing scenario. In the 1C:ERP 2.2 database layout, data is generated by processing based on master reference data:

  • Several thousand nomenclature items;
  • Several organizations;
  • Several thousand counterparties.

The test is carried out with several user groups. Each group consists of 4 users, each with their own role and list of sequential operations. Thanks to the flexible mechanism for setting test parameters, the test can be run with different numbers of users, which makes it possible to evaluate the system's behavior under different loads and identify parameters that can degrade performance. The test is run in 3 iterations, in each of which the 1C developer launches a test emulating user work and measures the execution time of every operation. All three iterations are measured for each of the 1C structural schemes. The result of the test is the average operation execution time for each document in the matrix.
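The aggregation step itself is plain averaging across iterations; an illustrative sketch with made-up numbers (the operation names and timings are not from the real test):

```python
# Illustrative only: averaging per-operation execution times over three iterations.
from statistics import mean

# execution times in seconds, one value per iteration
measurements = {
    "Customer order": [0.41, 0.44, 0.43],
    "Goods receipt":  [0.39, 0.40, 0.42],
}
for operation, times in measurements.items():
    print(f"{operation}: {mean(times):.3f} s on average over {len(times)} iterations")
```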

The results of the "Test Center" and the Gilev test are shown in summary Table 2.

Test stand

Terminal access server - a virtual machine used to manage the testing tools:

  • vCPU - 16 cores 2.6 GHz
  • RAM - 32 GB
  • I/O - Intel SATA SSD RAID1

1C server and DBMS - a physical server:

  • CPU - Intel Xeon E5-2670 8C 2.6 GHz, 2 pcs.
  • RAM - 96 GB
  • I/O - Intel SATA SSD RAID1
  • Roles: 1C Server 8.3.8.2137, MS SQL Server 2014 SP2

Conclusions

Judging by the average operation time, the best-performing option is scheme No. 3, "Cluster of "active-active" 1C servers connected to a single DBMS server over IP" (see Table 2). To make such an architecture fault-tolerant, we recommend building a classic MSSQL failover cluster with the database located on a separate storage system.

It is important to note that the best balance of factors for minimizing downtime, fault tolerance and data safety is offered by scheme No. 1, "Cluster of 1C servers connected to a cluster with synchronous SQL AlwaysOn replication over IP", whose performance drop compared to the fastest option is only about 10%.

As the test results show, synchronous AlwaysOn replication of the SQL database has a rather negative impact on performance. This is because SQL Server waits for each transaction to finish replicating to the backup server, blocking work with the database in the meantime. This can be avoided by configuring asynchronous replication between the MSSQL servers, but with such settings there is no automatic switchover of applications to the backup node in the event of a failure; the switchover has to be done manually.
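For reference, switching a secondary replica to asynchronous commit is a two-statement change on the primary; a hedged sketch in which the availability group and replica names are hypothetical:

```python
# Sketch under assumptions: AG_1C and SQL-NODE-2 are placeholder names.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-node-1.example.local;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
# Automatic failover is incompatible with asynchronous commit, so disable it first.
conn.execute("ALTER AVAILABILITY GROUP [AG_1C] "
             "MODIFY REPLICA ON 'SQL-NODE-2' WITH (FAILOVER_MODE = MANUAL)")
conn.execute("ALTER AVAILABILITY GROUP [AG_1C] "
             "MODIFY REPLICA ON 'SQL-NODE-2' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)")
```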

Based on the EFSOL cloud, we offer our clients a 1C server cluster for rent. This can significantly reduce the cost of building your own fault-tolerant architecture for working with 1C.



1C architecture scheme | Average operation time, sec: 50 users | 100 users | 150 users | Average deviation from the reference | Gilev test, conv. units
Scheme No. 1 "Cluster of 1C servers connected to a cluster with synchronous SQL AlwaysOn replication over IP"                  | 0.42245  | 0.44433  | 0.4391   | by 14% | 25.13
Scheme No. 2 "Cluster of 1C servers connected to a cluster with synchronous SQL AlwaysOn replication over IP, with encryption" | 0.435505 | 0.425227 | 0.425909 | by 12% | 21.65
Scheme No. 3 "Cluster of "active-active" 1C servers connected to a single DBMS server over IP"                                 | 0.40901  | 0.41368  | 0.42852  | by 9%  | 28.09
Reference scheme No. 4 "1C server and DBMS on one hardware server without virtualization, interacting via SharedMemory"        | 0.36020  | 0.42385  | 0.36335  | ---    | 34.23

Table 2 - final table (abbreviated version) of practical tests of different options for building 1C systems

If only a few employees in your company use 1C software, it is enough to purchase a good server and configure it correctly. However, if the number of users has reached 150-200 people and that is not the limit, installing a server cluster will help reduce the load on the equipment. Of course, installing additional equipment and training specialists to support the cluster will require some financial and time resources, but this is a long-term investment that later pays for itself through uninterrupted operation of the system. Much, however, depends on correct cluster settings: performance can be increased several times over without expensive investments. Therefore, before studying the functionality and purchasing servers, you need to make sure you need a 1C server cluster at all.

When is it worth installing a 1C server cluster?

When designing a work scheme and calculating the required server capacity, errors occur quite often. At first, system administrators can smooth them over by adding RAM or upgrading the CPU and other components. But there always comes a moment when these possibilities dry up and installing a server cluster becomes virtually inevitable. It is the cluster that solves the main problems of highly loaded systems:

  • Equipment and network failures. For particularly important databases, it is recommended to create a server cluster that acts as a backup;
  • Insufficient database security. An added advantage is the ability to encrypt the data of applications on the 1C platform;
  • Uneven distribution of load across server nodes. This is solved by creating several "worker processes" that handle client connections and requests.
  • Beyond solving these problems, a properly configured 1C server cluster also saves significantly on maintaining stable operation of 1C applications.

Owners of small companies facing the above problems may also be interested in installing a server cluster. Still, if the number of users does not exceed a few dozen and software performance raises no complaints, a cluster is not economically justified; it will be much more effective to upgrade the server or tune the key parameters properly. However, if the company is focused on growth and adding workstations, it is worth thinking about creating a 1C server cluster in the near future.

Installing a failover cluster of servers in standard cases does not require administrators to have in-depth knowledge of the structure and logic of server equipment.

Let's consider this algorithm using the example of combining two 1C 8.2 servers into a cluster.

Suppose that today you have two servers, on one of which (S1C-01) the 1C server and the infobases are installed. To configure a failover cluster, you need to deploy a 1C:Enterprise server on the S1C-02 server and start a worker process. Make sure that in its properties the "Usage" item is set to "Use". There is no need to register the infobases.


After this, in the 1C administration console you add a backup cluster named after the second server, S1C-02, to the "Cluster Reservation" section. In the same section on the second server we add a backup cluster named S1C-01 and move it to the top position using the context menu command "Move up". The order in these groups must be the same on both servers.

After the above steps, all that remains is to click "Action" - "Update". The infobases registered on the first server should then appear in the tree of the second server. This means our actions succeeded and we now have a failover cluster of two servers.
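On platform 8.3 and later, the result can also be verified from the command line through the standard rac/ras administration utilities (they did not ship with 8.2). The sketch below is a rough outline only: the install path, the ras endpoint and the exact argument syntax are assumptions that may vary between platform versions, so check them against your documentation. A ras service must already be running, started for example as "ras cluster --port=1545 S1C-01:1540".

```python
# Rough sketch: listing clusters through the rac administration client.
import subprocess

RAC = r"C:\Program Files\1cv8\8.3.8.2137\bin\rac.exe"  # hypothetical install path
RAS = "S1C-01:1545"  # ras administration server endpoint (assumed default port)

def rac(*args: str) -> str:
    """Run a rac command against the administration server and return its output."""
    result = subprocess.run([RAC, *args, RAS], capture_output=True, text=True, check=True)
    return result.stdout

print(rac("cluster", "list"))  # each entry includes the cluster UUID and its host
```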

This is one of the simplest examples of creating a server cluster and does not touch on optimization and correct configuration. For a final implementation of a cluster for specific tasks, you need to work out whether the capacity is sufficient and configure the resulting cluster professionally.

Cluster load and optimization

Load testing

The most common technologies for testing a 1C server cluster are:

  1. Gilev test;
  2. Test center from 1C:KIP.

The first is a tool that can evaluate both file and client-server databases. It assesses the speed of the system, the interfaces, lengthy operations and the resources needed for operation. Its big advantage is versatility: it makes no difference which configuration you test with it. The output is a score in conventional units.

The second allows you to estimate the time spent on particular operations in the system for a predetermined number of users. You can specify the number of operations, their type and their sequence yourself; the test will simulate real user actions.

Based on the results obtained, you can judge whether it is worth upgrading or optimizing the server cluster.

The easiest way to speed up 1C is to increase the server specifications. But there have been cases when, because of incorrect settings, the situation only worsened after a hardware upgrade. Therefore, if users complain about freezes, it is recommended to check the cluster settings in the administration console first.

You must take full responsibility for all actions. Cluster settings can seriously affect performance and functionality, both for the better and for the worse, and each setting affects all servers in the cluster. So before changing anything, you need to understand what each 1C cluster setting is responsible for.


An extremely useful parameter for servers used around the clock is the "Restart interval". Its value is typically set to 86400 seconds so that the worker processes restart automatically once a day. This helps reduce the negative effects of memory leaks and on-disk data fragmentation during long operation.

It is very important that the fault-tolerant cluster of 1C servers is protected from excessive memory use. One unsuccessful request in a loop can eat up all the power of multi-core servers. To prevent this, there are two cluster options: "Allowable memory amount" and "Interval for exceeding the allowable amount". If you configure these parameters correctly and precisely, you will protect your infobases from many common troubles.

Limiting the "Server error tolerance" percentage helps identify worker processes with too many failed calls; the cluster will forcibly terminate them if the corresponding checkbox is selected. This protects error-free processes from hanging around waiting.

Another parameter, "Stop disabled processes after", is responsible for regularly terminating stopped processes at the specified interval. In 1C, worker processes linger for some time after work is completed so that data can be handed over correctly to new processes. Sometimes failures occur and such processes remain hanging on the server; they waste resources, so it is much better to keep their number to a minimum.

In addition to optimizing the cluster itself, you also need to configure each server in it correctly. For convenient server optimization and performance checks, administrators use the server agent, ragent, which stores information about what is running on a specific server. For data on the infobases in use, turn to the server manager, rmngr.

For proper optimization, use the server cluster console and configure the following parameters for each server:

  • Maximum memory size of all worker processes. If this value is 0, the system allocates 80% of the server's RAM to the processes; if it is 1, 100%. If 1C and a DBMS are installed on the same server, they may conflict over memory, so this setting should be used. Otherwise the standard 80% will be enough; alternatively, calculate how much memory the OS needs and enter the remaining amount in this field (see the sketch after this list);
  • Safe memory consumption per call. The default value of "0" means that one worker process may consume no more than 5% of the maximum memory of all processes per call. Setting "-1" is not recommended, as it removes all restrictions, which is fraught with freezes;
  • Number of infobases and connections per process. These settings control how the load is distributed across worker processes. You can adjust them to your requirements to minimize losses from excessive server load. A value of 0 means no restrictions apply, which is dangerous when there are many workstations.
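As promised above, the memory limit arithmetic fits in a few lines; the reservation figures below are purely illustrative assumptions, not recommendations:

```python
# Back-of-the-envelope helper: what to enter as the maximum memory of all
# worker processes after reserving RAM for the OS (and the DBMS, if co-located).
GIB = 1024 ** 3

total_ram    = 96 * GIB  # physical RAM of the working server (as on our test stand)
os_reserve   = 4 * GIB   # assumed reservation for the operating system
dbms_reserve = 32 * GIB  # assumed reservation when the DBMS shares the server

worker_limit = total_ram - os_reserve - dbms_reserve
print(f"Maximum memory of all worker processes: {worker_limit} bytes "
      f"({worker_limit / GIB:.0f} GiB)")
```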

In version 8.3, another useful feature for properly distributing server load is "Manager for each service". This parameter makes it possible to use not one server manager (rmngr) but several, each responsible for its own task. This is a great way to track which service is degrading performance and to measure the resources allocated to each task.

After enabling this feature, the ragent server agent restarts, and instead of a single rmngr.exe you will find a whole list of them in the console. You can then use the task manager to find the process that is loading the system and do some fine-tuning; their pids will help you tell these processes apart. However, since this is an innovation, 1C experts recommend using the feature carefully.
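A small sketch of how those processes can be told apart programmatically, using the third-party psutil package (pip install psutil); the .exe process names assume Windows:

```python
# Hedged sketch: listing 1C cluster processes with their pids and memory use.
import psutil

for proc in psutil.process_iter(["pid", "name", "memory_info"]):
    if proc.info["name"] in ("ragent.exe", "rmngr.exe", "rphost.exe"):
        rss_mib = proc.info["memory_info"].rss / 1024 ** 2
        print(f"{proc.info['name']:<12} pid={proc.info['pid']:<8} rss={rss_mib:.0f} MiB")
```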

Before deciding to add a 1C server cluster to your infrastructure, check the server settings. There may be a way to correct the situation without buying expensive equipment and training specialists to set up a 1C cluster. It is not uncommon for a professional audit and server tuning by third-party specialists to make it possible to keep working at the old capacity for another couple of years. But in large companies the 1C server cluster remains the only solution that allows employees to work around the clock.

Server cluster 1C:Enterprise 8 (1C:Enterprise 8 Server Cluster)

The 1C:Enterprise 8 server cluster is the main component of the platform, mediating between the database management system and the user in client-server operation. The cluster makes it possible to organize uninterrupted, fault-tolerant, concurrent work for a significant number of users with large infobases.

A 1C:Enterprise 8 server cluster is a logical concept that denotes a set of processes that serve the same set of information databases.

The following capabilities of a server cluster can be identified as the main ones:

  • the ability to function on one computer or on several (working servers);
  • each working server can host one or several worker processes that service client connections within this cluster;
  • new clients are assigned to the cluster's worker processes based on long-term analysis of worker process load statistics;
  • all cluster processes interact with each other, with client applications and with the database server via the TCP/IP protocol;
  • cluster processes can run either as services or as applications.

Client-server option. Scheme of work

In this option, a client application interacts with the server. The server cluster, in turn, interacts with the database server.

The role of the central server of the cluster is played by one of the computers that are part of the server cluster. In addition to serving client connections, the central server also manages the operation of the entire cluster and stores the registry of this cluster.

Client connections address the cluster by the name of the central server and, possibly, a network port number. If the standard network port is used, it is enough to specify the name of the central server.
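An external connection string addresses the cluster in exactly this way; an illustrative sketch for Windows using the pywin32 package (the server, infobase and user names are hypothetical, and the COM connector must be registered on the machine):

```python
# Illustrative sketch: connecting to an infobase via the 1C COM connector.
import win32com.client

connector = win32com.client.Dispatch("V83.COMConnector")  # "V82.COMConnector" on 8.2
# "Srvr" names the central server; 1541 is the default cluster port and may be omitted.
connection = connector.Connect('Srvr="S1C-01:1541";Ref="accounting";Usr="Admin";Pwd="";')
```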

When establishing a connection, the client application contacts the central server of the cluster. Based on analysis of worker process load statistics, the central server directs the client application to the appropriate worker process, which will serve it. That process may be running on any working server in the cluster, including the central server itself.

This worker process maintains the connection and authenticates the user until the client stops working with the given infobase.

Server cluster

A basic server cluster can be a single computer and contain only one worker process.

In the figure you can observe all the elements that, one way or another, take part in the operation of the server cluster. These are the following elements:

  • server cluster processes:
    o ragent.exe;
    o rmngr.exe;
    o rphost.exe;
  • data storage:
    o list of clusters;
    o cluster registry.

The ragent.exe process, called the server agent, enables the computer to function as part of a cluster; accordingly, a computer on which ragent.exe is running is called a working server. In particular, one of the functional responsibilities of ragent.exe is to maintain the registry of the clusters located on that working server.

Neither the cluster registry nor the server agent is an integral part of the server cluster; they merely enable the server and the clusters located on it to function.

The server cluster itself consists of the following elements:

  • one or more rmngr.exe processes;
  • the cluster registry;
  • one or more rphost.exe processes.

Cluster manager (the rmngr.exe process). It controls the functioning of the entire cluster. A cluster may include several rmngr.exe processes, one of which is always the main manager of the cluster, while the rest are additional managers. The working server on which the main cluster manager operates and which holds the cluster list is called the central server of the cluster. Maintaining the cluster registry is one of the functions of the main cluster manager.

Worker process (the rphost.exe process). It is this process that directly serves client applications, interacting with the database server. Procedures of the configuration's server modules are executed within it.

Scalability of 1C version 8.3

Scalability of a server cluster is achieved in the following ways:

  • by increasing the number of managers in the cluster and distributing services between them;
  • by increasing the number of worker processes operating on a given working server;
  • by increasing the number of working servers that make up the cluster.

Using several managers simultaneously.

The functions performed by the cluster manager are divided into several services, which can be assigned to different cluster managers. This makes it possible to distribute the load evenly across several processes.

However, some services can only be used by the main cluster manager:

  • cluster configuration service
  • debug item management service
  • cluster lock service.

Other services can be assigned to arbitrary cluster managers:

  • log service
  • object blocking service
  • job service
  • full text search service
  • session data service
  • numbering service
  • custom settings service
  • time service
  • transaction blocking service.

Using several worker processes simultaneously.

On the one hand, using several worker processes reduces the load on each individual worker process. On the other hand, multiple worker processes make more effective use of the working server's hardware resources. Moreover, running several worker processes increases the reliability of the server, since it isolates groups of clients working with different infobases. In a cluster that allows multiple worker processes, a worker process can be restarted automatically within a time interval specified by the cluster administrator.

Running more worker processes (and thus serving more client connections) without increasing the load on any particular worker process is achieved by increasing the number of working servers in the cluster.

Fault tolerance of 1C version 8.3

The cluster's resilience to failures is ensured in three ways:

  • redundancy of the cluster itself;
  • redundancy of worker processes;
  • resilience to communication channel interruption.

Redundancy of the 1C cluster in version 8.3

Several clusters are combined into a redundancy group. Clusters that are in such a group are automatically synchronized.

If the active cluster fails, it is replaced by the next working cluster in the group. Once the failed cluster is restored, it will become active after data synchronization.

Redundancy of 1C worker processes in version 8.3

For each worker process, you can specify how it is to be used:

  • use;
  • do not use;
  • use as a backup.

If a worker process crashes, the cluster starts using a currently inactive backup process instead, automatically redistributing the load to it.

Resistance of 1C version 8.3 to communication channel interruption

Since each user is given their own communication session, the cluster stores data about which users have connected and what actions they performed.

If the physical connection drops, the cluster waits for the connection with this user to be restored. In most cases, after the connection is restored, the user can continue working exactly from the point where the connection was lost; there is no need to reconnect to the infobase.

Sessions in 1C version 8.3

A session identifies the active user of a specific infobase and the control flow from that client. The following types of sessions are distinguished:

  • Thin client, web client, thick client - these sessions are created when the corresponding clients access the infobase;
  • "Configurator" connection - created when the Configurator accesses the infobase;
  • COM connection - created when an external connection is used to access the infobase;
  • WS connection - created when the web server accesses the infobase as a result of a call to a Web service published on that web server;
  • Background job - created when a cluster worker process accesses the infobase; this session is used to execute the code of a background job procedure;
  • Cluster console - created when the client-server administration utility connects to a worker process;
  • COM administrator - created when a worker process is accessed via an external connection.

Work with different operating systems

All server cluster processes can run both under the Linux operating system and under Windows, since interaction within the cluster takes place over the TCP/IP protocol. The cluster can also include working servers running any of these operating systems.
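Because everything rides on TCP/IP, a basic reachability check is also platform-independent; a tiny sketch with a hypothetical host name (1540 is the default agent port):

```python
# Minimal sketch: verifying that the cluster agent accepts TCP connections.
import socket

with socket.create_connection(("S1C-01", 1540), timeout=3):
    print("ragent is reachable on port 1540")
```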

Server Cluster Administration Utility 8.3

The system package includes a utility for administering the client-server option. This utility makes it possible to change the composition of the cluster, manage information bases, and quickly analyze transaction locks.