Bhavith C Acharya

Reputation: 355

Which is the best way of implementing a TCP/UDP server? Should each incoming request be handled in a thread or in a process?

Hi, I am implementing a TCP server. The requests are names of functions, and I need to execute the named function from a library present on the machine where the server runs. There is a chance that a library function may cause a segmentation fault or a floating point exception. I plan to perform the function call in a separate process, so that any crash only kills the child process.

My question is whether doing such an operation in a separate process or using threads is better.

Also, could anybody please let me know how I can restart my server application when it crashes? I wrote restart.conf and put it under /etc/init/, but the server restarts only when the system reboots, not when the application crashes. I don't want to do it in a do-while loop.

Upvotes: 0

Views: 114

Answers (4)

red0ct

Reputation: 5055

Perhaps there is a problem in your approach. IMHO the network part should be tested separately from the trigger part, which must be debugged and tested on different data sets. A segmentation fault is not a normal result for your library functions; it is out of the ordinary. For small servers with well-written, inexpensive triggers, using only the poll function would be enough. To increase performance, use poll + fork (clone some workers, each with its own polling loop). That's a start.

You also need to become familiar with the epoll/kqueue functions. Select/poll loops can help you use every forked process more effectively, but that's another long-standing dispute.
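For illustration, here is a minimal single-process poll() accept loop; the listening socket setup and the worker hand-off are assumed, and the pre-forked variant suggested above would run this loop inside each forked worker. A sketch, not a complete server:

    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Assumes listen_fd was created with socket()/bind()/listen(). */
    void serve(int listen_fd)
    {
        struct pollfd pfd = { .fd = listen_fd, .events = POLLIN };

        for (;;) {
            if (poll(&pfd, 1, -1) < 0)      /* block until something arrives */
                continue;
            if (pfd.revents & POLLIN) {
                int client = accept(listen_fd, NULL, NULL);
                if (client < 0)
                    continue;
                /* hand 'client' off to a worker here */
                close(client);
            }
        }
    }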


You can manage your network services by using xinetd.
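As a hedged example, an xinetd entry for such a server could look like the following (the service name, port, user, and binary path are placeholders), typically saved as /etc/xinetd.d/myfuncserver:

    service myfuncserver
    {
        type        = UNLISTED
        port        = 9000
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = nobody
        server      = /usr/local/bin/myfuncserver
    }

With wait = no, xinetd forks a fresh server process per connection, so a crash in one handler never takes the listener down.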

Upvotes: 3

Jean-Baptiste Yunès

Reputation: 36391

Use a process if you want to isolate crashes from the main waiting loop.

To restart your server, use inetd.
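For example, an /etc/inetd.conf line for such a service might look like this (the service name and binary path are placeholders, and the service must also be declared with its port in /etc/services):

    myfuncserver stream tcp nowait nobody /usr/local/bin/myfuncserver myfuncserver

With nowait, inetd starts a fresh process per connection, so a crash in one handler doesn't stop the service, and the next connection gets a new process anyway.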

Upvotes: 0

Klas Lindbäck

Reputation: 33273

The same design question was raised about 25 years ago when deciding on how to implement CGI scripts in web servers.

The answer back then was to start a new process for each CGI script call. The rationale was that a buggy CGI script shouldn't be able to bring down the entire web server.

When it comes to performance, the impact depends on which OS you are using. On Linux/Unix-based systems (including macOS), creating a new process is fairly cheap, while on Windows it is fairly expensive.

If you expect certain calls to crash the process, then you should definitely go for creating a new process for each call. That way your server will continue running through those bad calls.
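A minimal sketch of that pattern; call_library_function() here is a hypothetical stand-in for the look-up-and-call step described in the question:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical: resolves 'name' in the library and calls it. */
    extern int call_library_function(const char *name);

    int run_isolated(const char *name)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;                     /* fork failed */
        if (pid == 0)                      /* child: make the risky call */
            _exit(call_library_function(name));

        int status;
        waitpid(pid, &status, 0);          /* parent: reap the child */
        if (WIFSIGNALED(status)) {         /* SIGSEGV, SIGFPE, ... */
            fprintf(stderr, "call '%s' died with signal %d\n",
                    name, WTERMSIG(status));
            return -1;
        }
        return WEXITSTATUS(status);
    }

The parent survives a SIGSEGV or SIGFPE in the child and can report the failure to the client instead of dying.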

Upvotes: 3

deeiip

Reputation: 3379

The reason for using a thread instead of a process is that threads are lightweight. You will not notice the difference if your server handles only a few requests. So if you are handling a large number of requests, go for threads; otherwise either approach is fine.
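For comparison, a thread-per-connection sketch could look like this (the handler body is a placeholder); keep in mind that, unlike a child process, a segmentation fault in any thread still kills the whole server:

    #include <pthread.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void *handle_client(void *arg)
    {
        int client = *(int *)arg;
        free(arg);
        /* read the request, run it, write the reply ... */
        close(client);
        return NULL;
    }

    /* Called from the accept loop for each new connection. */
    void spawn_handler(int client)
    {
        pthread_t tid;
        int *fdp = malloc(sizeof *fdp);
        if (fdp == NULL) {
            close(client);
            return;
        }
        *fdp = client;
        if (pthread_create(&tid, NULL, handle_client, fdp) == 0) {
            pthread_detach(tid);           /* no join needed */
        } else {
            free(fdp);
            close(client);
        }
    }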


To do a restart you can catch SIGSEGV and then do whatever you want with system(). But you have to decide whether you really want to do this: you'll never know what was corrupted, or when. Because:

The behavior of a process is undefined after it returns normally from a signal-catching function for a [XSI] SIGBUS, SIGFPE, SIGILL, or SIGSEGV signal that was not generated by kill(), [RTS] sigqueue(), or raise().
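If you do try it anyway, about the only thing that is even remotely defensible in the handler is to re-exec the binary (execv() is async-signal-safe, system() is not). A heavily hedged sketch, which also assumes argv[0] is a usable path; a supervisor such as inetd/xinetd, as the other answers suggest, is the safer route:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static char *self_path;                /* set from argv[0] in main() */

    static void on_crash(int sig)
    {
        (void)sig;
        /* SIGSEGV is blocked inside its own handler and the mask
         * survives exec, so unblock it before restarting. */
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGSEGV);
        sigprocmask(SIG_UNBLOCK, &set, NULL);

        char *args[] = { self_path, NULL };
        execv(self_path, args);            /* async-signal-safe restart */
        _exit(1);                          /* exec failed: give up */
    }

    int main(int argc, char **argv)
    {
        (void)argc;
        self_path = argv[0];

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_crash;
        sigaction(SIGSEGV, &sa, NULL);

        /* ... run the server ... */
        return 0;
    }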

Upvotes: 1
