# netcat-like stdio to tcp "proxy"? [SOLVED]

## Sadako

I'm playing around with http requests from shell, and I have a working script using both bash's built-in networking and netcat/socat with named pipes, but both nc and socat terminate after one http request and response...

I'm looking for a solution where the "proxy" I'm using keeps running and accepts new connections after the first one, something neither nc nor socat seems to support unless binding to an ip address.

Running nc or socat within a while loop would do the trick, but I'm looking for something "cleaner", or more efficient. Any ideas?

In case it's unclear, this is what I'm using:

```
nc 127.0.0.1 8080 < nc_in > nc_out
```

where nc_in and nc_out are two named pipes (fifos).
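For reference, the while-loop fallback mentioned above might look like this (host/port are taken from the example command; `proxy_loop` is just an illustrative name):

```shell
# Restart nc after each connection so the stdio<->tcp bridge stays
# available; nc exits whenever the server closes the socket.
proxy_loop() {
    mkfifo nc_in nc_out 2>/dev/null   # reuse the fifos if they already exist
    while :; do
        nc 127.0.0.1 8080 < nc_in > nc_out
    done
}
```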

Thanks.

----------

## Ant P.

net-misc/s6-networking?

----------

## szatox

Server-side nc:

 *Quote:*   

> --continuous
> Enable continuous accepting of connections in listen mode, like inetd. Must be used with --exec to specify the command to run locally (try 'nc6 --continuous --exec cat -l -p <port>' to make a simple echo server).


If you want to have many sources use a single netcat client, you can have it read from a fifo. A fifo stream only ends when the last program on either end exits, and it can be used by many programs at the same time. This means you can use some idling process to keep the fifo open (say `( read; ) > fifo`), and then you can even write to it with `echo > fifo`; it will not terminate the connection even though echo exits instantly.
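A runnable sketch of that trick, with plain `cat` standing in for the netcat client so it works without a server (the scratch directory and all names are illustrative):

```shell
dir=$(mktemp -d) && cd "$dir"
mkfifo nc_in

# Stand-in for the long-running client ("nc 127.0.0.1 8080 < nc_in"):
cat < nc_in > received.txt &
reader=$!

# An idle process holds one write end open, so the reader never sees
# EOF while short-lived writers come and go:
sleep 30 > nc_in &
holder=$!
sleep 1                     # crude sync: let the holder finish its open()

echo "request 1" > nc_in    # echo exits immediately, the stream stays open
echo "request 2" > nc_in

kill "$holder"              # dropping the last writer delivers EOF
wait "$reader" 2>/dev/null
```

Both lines end up in `received.txt` even though each `echo` opened and closed the fifo on its own.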

Still, even though sharing a single connection is possible, I'd rather go with one instance of nc/curl/anything per task.

----------

## Sadako

Thank you both very much for your responses. As it turns out, I was looking at this wrong: the actual problem was that the http server was closing the connection after serving a single request, which is why netcat was exiting.

I was just testing with busybox httpd, which doesn't support http keep-alive (but I actually knew that, so I have no excuse really...).

After switching to lighttpd and adding the "Connection: keep-alive" header to my requests, it works exactly as I wanted.
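For illustration, a request written into the fifo could look roughly like this (`send_request` and the Host value are made up for the example; HTTP/1.1 keeps connections alive by default, the header just makes it explicit):

```shell
# Write one HTTP/1.1 request for path $1 into nc_in (the fifo that
# netcat reads from in the setup above); "Connection: keep-alive"
# asks the server to leave the socket open for the next request.
send_request() {
    printf 'GET %s HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: keep-alive\r\n\r\n' \
        "$1" > nc_in
}

send_request /index.html
```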

 *szatox wrote:*   

> If you want to have many sources use a single netcat client, you can have it read from a fifo. A fifo stream only ends when the last program on either end exits, and it can be used by many programs at the same time. This means you can use some idling process to keep the fifo open (say `( read; ) > fifo`), and then you can even write to it with `echo > fifo`; it will not terminate the connection even though echo exits instantly.

Yeah, what I was doing was opening file descriptors to the named pipes I had redirected netcat's stdin and stdout to in the shell script. *Quote:*   

> Still, even though sharing a single connection is possible, I'd rather go with one instance of nc/curl/anything per task.

Generally, I completely agree on this, but there are always exceptions. ;)
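For the record, a self-contained sketch of that file-descriptor trick, with `cat` standing in for netcat so it runs without a server (fd numbers 3/4 and all names are arbitrary):

```shell
dir=$(mktemp -d) && cd "$dir"
mkfifo nc_in nc_out

# Stand-in for "nc 127.0.0.1 8080 < nc_in > nc_out":
cat < nc_in > nc_out &

# Keep both pipe ends open for the life of the script, so individual
# writes and reads don't deliver EOF to the "proxy":
exec 3> nc_in 4< nc_out

echo "request 1" >&3
read -r reply1 <&4

echo "request 2" >&3
read -r reply2 <&4

exec 3>&- 4<&-    # closing the write end finally EOFs the reader
wait
```

Each `echo`/`read` pair reuses the same long-lived "connection"; only closing fd 3 ends it.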

Anyways, ty both again.

----------

