Configuration of multiple dpss processes within a single computer

Document created by cdnadmin on Jan 25, 2014
Version 1
This document was generated from CDN thread

Created by: Viktor S. Wold Eide on 12-12-2013 11:34:08 AM
Hi,

On a single VM we have started two dpss processes, each handling a separate vIOS router. The dpss config file for each router specifies a different IP address. However, each dpss process binds to 0.0.0.0 and not the IP address specified in the configuration file.

netstat -nalp --inet | grep 47 | grep dpss
raw        0      0 0.0.0.0:47              0.0.0.0:*               7           3953/dpss_mp
raw        0      0 0.0.0.0:47              0.0.0.0:*               7           3922/dpss_mp

Is this intended? One might expect that each dpss process should bind only to the interface specified?

Best regards
Viktor

Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Raghavendra Gutty Veeranagappa on 13-12-2013 06:14:30 AM
Hi Viktor,

Please run the command below once an application leveraging DPSS has been started and has connected to an NE:

[root@onePK-EFT1 cisco]# netstat -anp | grep dpss
tcp        0      0 0.0.0.0:15009               0.0.0.0:*                   LISTEN      17307/dpss_mp
tcp        0      0 127.0.0.1:51851             127.0.0.1:9121              ESTABLISHED 17307/dpss_mp
tcp        0      0 127.0.0.1:15009             127.0.0.1:44438             ESTABLISHED 17307/dpss_mp

raw        0      0 0.0.0.0:47                  0.0.0.0:*                   7           17307/dpss_mp
[root@onePK-EFT1 cisco]#

Thanks,
Raghavendra

Subject: Re: New Message from Raghavendra Gutty Veeranagappa in onePK Developer - Tr
Replied by: Viktor S. Wold Eide on 13-12-2013 06:57:15 AM
Hi Raghavendra,

As mentioned, in this VM setup (using sdk-c64-1.0.0.84) we have two instances of the dpss_mp process running. We are using raw transport, i.e., "TRANSPORT raw" is specified in the dpss config files. The following is the output after two onep dpss applications have been started, have connected to one router each, and are currently punting and injecting packets.

# netstat -nap --inet | grep dpss
raw        0      0 0.0.0.0:47              0.0.0.0:*               7           4202/dpss_mp
raw        0      0 0.0.0.0:47              0.0.0.0:*               7           4169/dpss_mp

netstat -nap --inet | grep onep
tcp        0      0 5.1.3.130:47893         5.1.3.129:15001         ESTABLISHED 4213/onep-1
tcp        0      0 5.1.3.130:47894         5.1.3.129:15001         ESTABLISHED 4213/onep-1
tcp        0      0 5.1.4.130:52047         5.1.4.129:15001         ESTABLISHED 4180/onep-2
tcp        0      0 5.1.4.130:52046         5.1.4.129:15001         ESTABLISHED 4180/onep-2

You seem to have only a single dpss process (pid 17307), which listens on both TCP and raw sockets while also having established TCP connections?

Best regards
Viktor


Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Joseph Clarke on 13-12-2013 04:21:23 PM
The address is used to let the NE know where to terminate the GRE tunnel.  I'm not sure if Linux allows one to bind for GRE on a specific address.  I'll see if Einar can comment here.

Out of curiosity, why do you need two dpss_mps?  You should be able to have multiple devices talk to one dpss_mp.

Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Einar Nilsen-Nygaard on 13-12-2013 04:41:54 PM
Viktor,

This must be with a 1.0 SDK, as I guess you don't have the 1.1 version yet. When you do, you will find that the dpss_mp won't allow itself to be started multiple times as it now creates a pid file to prevent that happening inadvertently.

To explain why you see what you see, the dpss_mp opens a raw socket thus:

    socket (PF_INET, SOCK_RAW, IPPROTO_GRE)

Hopefully this explains the netstat output you got and explains Joe's query.

With the 1.1 SDK, a single dpss_mp can definitely handle multiple vIOS/real router instances, so I echo Joe's query: is this for scale, or some other reason? Incidentally, while you may be able to start multiple instances, it probably won't work, as we intended the dpss_mp to be a singleton; you will most likely see odd failures, which is why we added the checks in 1.1. And in 1.2 we will be refining the installation process somewhat, allowing the dpss_mp to be configured to start automatically and be restarted as necessary, in the way expected of typical Linux daemon processes.

Having said all that, you seem to have two processes up (which is allowed with the 1.0 version), but while you have multiple routers up & running, do you actually see activity from both dpss_mp instances, e.g., by tracking their activity with "-d all" enabled and seeing both instances handling packets?

Cheers,

Einar

Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Einar Nilsen-Nygaard on 16-12-2013 05:49:43 AM
Viktor,

Yes, I can see why you might want to take this approach, and I mostly agree with the goals.

In terms of a potentially alternative way of achieving the same thing, have you considered using something like LXC? This would give you lightweight virtualization in the context of a single VM or bare-metal deployment. I haven't fully thought this through, but maybe using something like Vagrant and its LXC provider (https://github.com/fgrehm/vagrant-lxc/blob/master/README.md) would be useful. It would also be possible to do this more manually, but the idea of using Vagrant appeals because of the flexibility and repeatability of quickly deploying new instances. Note that on some Cisco platforms we support using LXC internally. You may have heard us (but not me specifically) talk about "service containers"?

It might require some thinking on the networking side to deal with GRE traffic, but I would prefer this approach to going down the path of multiple dpss_mp instances running in the same OS context.

What do you think as a potential way forward?

Cheers,

Einar

Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Einar Nilsen-Nygaard on 16-12-2013 07:06:16 AM
Viktor,

As the dpss_mp is using a raw socket that is not bound to a local IP and looking for GRE packets, each instance will, indeed, see all packets coming from the two routers, and, yes, there is internal filtering such that each will ignore packets not intended for it.

Cheers,

Einar

Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Einar Nilsen-Nygaard on 16-12-2013 07:24:19 AM
Viktor,

I think I need to understand a little more of your desired deployment scenarios before deciding on the best way to handle your requirements. For a variety of reasons, we are actively considering the option of collapsing the dpss_mp into the client process, essentially giving clients a library that they can use without the need for a dpss_mp process at all. There are multiple reasons for this that I can go into separately if you like, but the net result of this approach would be that client processes would have more direct control over the exact configuration of where packets flow between the application(s) and the router(s) in exchange for taking more responsibility for the initial configuration. IOW, you'd have to get your hands a little more dirty at both initialization time and during packet processing. Let me list out some of the benefits and downsides:
  • Client process has to understand more about initializing transport.
  • Client would have to provide a "main loop" (i.e. do file descriptor polling, maybe using libevent or by rolling your own main loop).
  • The dpss_mp is gone as a separate process, removing the need for shared memory, removing unix domain socket comms between it & client, etc.
  • Fewer IPC hops.
  • Client apps can still interact with multiple routers.
  • Multiple clients can still interact with the same router (but still subject to the restrictions of the PSS).
  • Clients would have more control over the overall concurrency design.
  • No updates required to router software; purely client-side enhancements.
There are probably more things I could highlight, but I think this is enough to give you the flavour. I would far prefer investing in this path rather than investing in making multiple instances of the dpss_mp a viable proposition. At this point I think we will have provided appropriate flexibility for applications.

How does this direction suit your deployment scenarios, such as you are able to discuss?

Cheers,

Einar

Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Viktor S. Wold Eide on 16-12-2013 05:19:36 AM
Hi Joseph and Einar,

Having just a single instance of the dpss_mp is certainly a reasonable default and we appreciate that this is supported.

On the other hand, we think it should also be possible to have several dpss_mp processes running on a single computer. Having a separate dpss_mp instance handle each router is sometimes preferable, as it more closely resembles a real-world scenario where a single router is accessed by a co-located onep application and an associated dpss_mp process. Being able to have such a setup within a single VM environment seems highly beneficial for development and test purposes. As an example, for testing robustness and fault handling, one scenario would be to test the behavior of the system with a single overloaded or failing dpss_mp process. In this case only the onep application(s) associated with the failing dpss_mp process would be directly affected. This would allow more of the overall system behavior to be tested, which is important for achieving graceful degradation and fault handling. When something goes wrong, it might also be easier to track down the cause with a single dpss_mp associated with each router.

In our opinion both setups should be supported - a single shared dpss_mp or multiple dedicated dpss_mp processes within a single computer / VM.

Best regards
Viktor

Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Viktor S. Wold Eide on 16-12-2013 06:23:29 AM
Hi Einar,

This is with 1.0 SDK sdk-c64-1.0.0.84.

As mentioned in another reply, having just a single instance of the dpss_mp is certainly a reasonable default, and we appreciate that this is supported. However, we do think that having multiple dpss_mp processes should remain possible in upcoming versions, since it resembles real use cases and is required for testing certain scenarios (see previous reply).

Yes, we certainly see activity in both dpss_mp processes. Each dpss process sends and receives data as it should. However, each dpss process also receives data from the other router, which it should not (unless there is some filtering going on inside the dpss process). As can be seen below, both dpss processes receive data from both 5.1.3.129 (r3) and 5.1.4.129 (r4). They should each receive from only one router, as the dpss for r3 is configured with LOCAL_IP 5.1.3.130 and the dpss for r4 with LOCAL_IP 5.1.4.130 in their respective dpss config files.

6node-R3#show ip interface brief
Interface                  IP-Address      OK? Method Status                Protocol
GigabitEthernet0/3         5.1.3.129       YES NVRAM  up                    up     

6node-R4#show ip interface brief
GigabitEthernet0/3         5.1.4.129       YES NVRAM  up                    up     

ps axu | grep "onepk/bin/dpss_mp"
root      4169  0.0  0.1 104360  4388 pts/16   Sl+  Dec12   1:02 ./onepk/bin/dpss_mp -c dpss-r4.conf -f
root      4202  0.0  0.1 104360  4372 pts/38   Sl+  Dec12   1:06 ./onepk/bin/dpss_mp -c dpss-r3.conf -f


Strace output from dpss for router r3:
[pid  4202] recvfrom(15, "E\0\1\206k\26@\0\377/\374-\5\1\4\201\5\1\4\202\0\0\211!\1\0\16\1\3\0\4k"..., 2000, 0, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("5.1.4.129")}, [16]) = 390
[pid  4202] recvfrom(15, "E\0\0019fL@\0\377/\3E\5\1\3\201\5\1\3\202\0\0\211!\1\0\16\1\3\0\4f"..., 2000, 0, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("5.1.3.129")}, [16]) = 313
[pid  4202] sendmsg(15, {msg_name(16)={sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("5.1.3.129")}, msg_iov(3)=[{"E\0006\0\0\0\0\0@/\3133\5\1\3\202\5\1\3\201\0\0\211!", 24}, {"\1\0\16\5\3\0\10\202\363\0\0\0\25\0\0\0n", 17}, {"\3\0\n\1\0\0\0\1\350\0\1\265\317", 13}], msg_controllen=0, msg_flags=0}, 0) = 54


Strace output from dpss for router r4
[pid  4169] recvfrom(15, "E\0\1\274fn@\0\377/\2\240\5\1\3\201\5\1\3\202\0\0\211!\1\0\16\1\3\0\4f"..., 2000, 0, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("5.1.3.129")}, [16]) = 444
[pid  4169] recvfrom(15, "E\0\1\324k9@\0\377/\373\274\5\1\4\201\5\1\4\202\0\0\211!\1\0\16\1\3\0\4k"..., 2000, 0, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("5.1.4.129")}, [16]) = 468
[pid  4169] sendmsg(15, {msg_name(16)={sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("5.1.4.129")}, msg_iov(3)=[{"E\0006\0\0\0\0\0@/\3131\5\1\4\202\5\1\4\201\0\0\211!", 24}, {"\1\0\16\5\3\0\10Y\320\0\0\0\10\0\0\0\4", 17}, {"\3\0\n\1\0\0\0\1\235\0\1\211\345", 13}], msg_controllen=0, msg_flags=0}, 0) = 54

Best regards
Viktor


Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Viktor S. Wold Eide on 16-12-2013 07:02:20 AM
Hi Einar,

It is good to see that you recognize the need. We hope that the onepk team will address this issue for the 1.1 version, as it is important in general for development and test.

We know that it might be possible to do this differently, and maybe also with LXC as you mention. Without thinking it through, we have also thought about whether it would be possible for you to co-locate both the vIOS router and a dpss within the same VM. More specifically to have both a vIOS and a dpss process within each of the /usr/bin/qemu-system-x86_64 VMs in the all_in_one_VM.

However, it is important for us that the test and development environment matches the real deployment as closely as possible, to make sure that we can catch problems as early as possible. Given that a dpss process has to run co-located with our onep application in our deployment, this is the setup that we would like to have for the test and development environment as well. Additionally, we do not think that other ways of achieving this should rule out having multiple dpss_mp processes running concurrently, although there are certainly tradeoffs that we are unaware of.

Please let us know in what way you think this should be handled.

Best regards
Viktor


Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Einar Nilsen-Nygaard on 18-12-2013 09:12:18 AM
Viktor S. Wold Eide:
Hi again,

Thanks a lot for sharing your considerations and plans. It's appreciated.

Our overall first reaction is that collapsing the dpss_mp functionality into the client process would be good for a number of reasons.

As you write, the client process has to understand more about initializing the transport. That's reasonable. Currently, separate configuration information is required in dpss.config file(s) and for the router(s), which complicates configuration and maintenance.
Good, I'm glad you have the same understanding of the deficiencies of the current approach that we have!
Viktor S. Wold Eide:
We would prefer getting access to the file descriptor(s), as we can then choose whether to integrate this into our own event handling loop(s) (e.g., using libev) or handle it by separate thread(s). This would improve the flexibility compared to the current interface, and as you write, provide more control with respect to the concurrency design.
Excellent.
Viktor S. Wold Eide:
One-to-many, many-to-one, and many-to-many interaction between client app(s) and router(s) should be supported. We're not sure how removing the dpss_mp process would affect this. Currently, if multiple co-located client apps register interest in the same packets from a router, only a single instance of the packet could be sent from the router to the dpss_mp process, which would then provide a copy to each client app. I'm not sure how important this case is, and it should also be possible to realize without the dpss_mp process itself.
We term the idea of multiple collocated apps interested in the same packet(s) "service chaining". This is something we had as a goal for the DPSS overall, but a number of factors have dissuaded us from implementing this as yet, and we currently have no concrete plans to implement it for a variety of reasons, one being the lack of concrete requirements. We have been working with an internal team who want to deliver something that could be called "service chaining", but the use case is more constrained and, as a result, is essentially handled by the application internally by the addition of modules.

What this means is that for the foreseeable future the model, with or without the dpss_mp, will stay the same -- only a single end-user application will be able to register for packets on a specific (interface, location), where location is either input or output.
Viktor S. Wold Eide:

It is good to hear that you can remove the dpss_mp process through purely client-side enhancements, as the transition would potentially be a lot smoother. Obviously, it is important for our ongoing development work that the data path service set remains functional and supported while you collapse the dpss_mp functionality into the client process.

There are also other issues to consider, including e.g., security and fault handling. Regarding fault handling, having fewer processes to monitor should be a benefit.
Fault handling is an area we hope to improve on. As you have likely experienced, fault handling today is quite severe, and usually results in the application being taken down or having to be restarted.

It would be helpful if you could provide thoughts on the security side. As you are probably aware, the onePK control channel has the TLS option, both one-way and two-way, to ensure the authenticity and privacy of control plane messages, but as yet we can see no compelling argument for applying encryption to the data packets being transferred over the DPSS data path. Our security team has analysed the capabilities of the DPSS and, so far, has come to the conclusion that end applications may do nothing more damaging than may already be achieved by injecting packets on the wire. As far as privacy of the packets extracted from a router is concerned, our current advice is that if the packets will be transmitted across a "trust boundary", then the end user should ensure that appropriate link protection is applied, e.g., 802.1AE at the link level, or protecting the L3 path using, for example, IPsec. Again, I'm interested in your feedback here.
Viktor S. Wold Eide:

More in-depth information would be useful and interesting, along with some indication of the expected timeline.
As we move forward, I am happy to share what technical details I can. Timelines can also be shared when we have a committed plan.

Cheers,

Einar

Subject: RE: Configuration of multiple dpss processes within a single computer
Replied by: Viktor S. Wold Eide on 18-12-2013 07:40:09 AM
Hi again,

Thanks a lot for sharing your considerations and plans. It's appreciated.

Our overall first reaction is that collapsing the dpss_mp functionality into the client process would be good for a number of reasons.

As you write, the client process has to understand more about initializing the transport. That's reasonable. Currently, separate configuration information is required in dpss.config file(s) and for the router(s), which complicates configuration and maintenance.

We would prefer getting access to the file descriptor(s), as we can then choose whether to integrate this into our own event handling loop(s) (e.g., using libev) or handle it by separate thread(s). This would improve the flexibility compared to the current interface, and, as you write, provide more control with respect to the concurrency design.

One-to-many, many-to-one, and many-to-many interaction between client app(s) and router(s) should be supported. We're not sure how removing the dpss_mp process would affect this. Currently, if multiple co-located client apps register interest in the same packets from a router, only a single instance of the packet could be sent from the router to the dpss_mp process, which would then provide a copy to each client app. I'm not sure how important this case is, and it should also be possible to realize without the dpss_mp process itself.

It is good to hear that you can remove the dpss_mp process through purely client-side enhancements, as the transition would potentially be a lot smoother. Obviously, it is important for our ongoing development work that the data path service set remains functional and supported while you collapse the dpss_mp functionality into the client process.

There are also other issues to consider, including, e.g., security and fault handling. Regarding fault handling, having fewer processes to monitor should be a benefit.

More in-depth information would be useful and interesting, along with some indication of the expected timeline.

Best regards
Viktor
