[LCP] : gcc socket causes virtual memory growth

Anton V. Demidov demid at kemsu.ru
Sat Nov 17 12:54:03 UTC 2007


Sorry for the misinformation in my previous message. Further research
showed that the virtual memory taken by the client process is actually
not large. The large amount of memory is taken by a forked copy of the
server process; fork() is called after a new connection is accepted on
the server's listening socket.
So the client itself is small, taking just a few MB of memory, which
makes it unclear why it produces errors and halts. The error occurs
when I try to create a new stringstream like this:

    // stream to convert int to string
    stringstream ss;
    ss << t->task_id << ::std::endl;
    string s_tmp;
    ss >> s_tmp;
    ss.clear();
This code works perfectly many times in my program but suddenly stops
working in one particular case. Since the code itself is obviously
correct, I suspect the problem lies elsewhere. If I allocate a
structure with "new" before this code, the client halts before even
creating the stringstream. It appears the OS won't let me use any more
memory, yet ulimit shows no limit on memory usage.
One idea is that my process should give the OS some time to clear
system buffers after working with the socket, so I need some analogue
of Java's yield() function; the usual sleep() doesn't help. Is there
any other way to force a process to release the CPU?


And one more question: this list is dedicated to C programming, so may
I ask C++ questions here, or is this the wrong place for them?

Best regards
Anton Demidov
mailto:demid at kemsu.ru


Friday, November 16, 2007, 8:54:57 PM, I wrote:

AVD> Dear colleagues,
AVD> I subscribed to this list some time ago and haven't yet had a chance
AVD> to help anyone, but I hope you can give me some idea about an issue
AVD> that is driving me crazy.
AVD> I'm developing a client-server application whose two parts connect
AVD> via a socket. The server sends structures which are parsed on the
AVD> client side, and the client replies with structures to the server.
AVD> Suddenly my client started halting with a segmentation fault.
AVD> After a few days of checking my code (which is rather simple) I
AVD> discovered that the problem is probably caused by virtual memory
AVD> exhaustion.
AVD> On startup the server opens a connection to the Oracle DB (using
AVD> OCCI), creates a socket and starts listening on it. After that,
AVD> according to ps, the server process uses 89772mb of virtual memory
AVD> (a considerably large amount; could it be caused by the OCCI
AVD> library?).
AVD> On startup the client creates its socket and connects to the
AVD> server. It doesn't use OCCI or any library other than stdc++, yet
AVD> it takes EXACTLY the same amount of virtual memory! I can't find an
AVD> explanation for this. By adding sleep() calls and monitoring top,
AVD> I found that the client process's memory usage increases at the
AVD> moment connect() is called. After bind() it is still absent from
AVD> top's output, but after connect() it is there. After that, in the
AVD> main loop, I try to create some new structures and get the
AVD> segmentation fault.
AVD> If I remove all the logic and leave just the connect and shutdown
AVD> calls, my client and server don't eat so much memory. But I can't
AVD> see why my logic affects the process's size at the moment of
AVD> connect(), before that logic even starts running.
AVD> Sorry for such a long message; I hope it hasn't tired you out. If
AVD> some points aren't clear enough, just ask and I'll explain them
AVD> again. I'm totally desperate over this issue and ask for your help.

AVD> This is the source code of my client application. I removed all inner
AVD> logic and left just socket work.

AVD> It's compiled with "gcc -g -L/usr/lib -lstdc++ -Wno-deprecated"

AVD> #include <iostream>
AVD> #include <sys/stat.h>
AVD> #include <errno.h>
AVD> #include <dirent.h>
AVD> #include <fstream>
AVD> #include <sys/types.h>
AVD> #include <sys/stat.h>
AVD> #include <unistd.h>
AVD> #include <time.h>
AVD> #include <signal.h>
AVD> #include <cstring>
AVD> #include <string>
AVD> #include <sstream>
AVD> #include <stdlib.h>
AVD> #include <sys/socket.h>
AVD> #include <arpa/inet.h>
AVD> #include <netdb.h>
AVD> #include <netinet/in.h>
AVD> #include "cluster_agent.h"
AVD> #include "VIP_protocol.h"



AVD> using namespace std;

AVD> class cMyClientSocket
AVD> {
AVD>     private: 
AVD>         int SockHandle; // socket file descriptor
AVD>         struct sockaddr_in localAddr, servAddr;
AVD>         int error;
AVD>         struct hostent *h;
        
        
AVD>     public:
AVD>         cMyClientSocket() //constructor
AVD>         {

AVD>         }
        
AVD>         ~cMyClientSocket() //destructor
AVD>         {

AVD>         }
        
AVD>         int sock_create()
AVD>         {
AVD>             h = gethostbyname(SERVER_ADDRESS);
AVD>             if (h == NULL) {return -100;}
AVD>             servAddr.sin_family = h->h_addrtype;//AF_INET;
AVD>             memcpy((char *) &servAddr.sin_addr.s_addr,
AVD> h->h_addr_list[0], h->h_length);
AVD>             servAddr.sin_port = htons(SERVER_PORT);
AVD>             SockHandle = socket(AF_INET, SOCK_STREAM, 0);
AVD>             return SockHandle;
AVD>         }
        
AVD>         int sock_bind()
AVD>         {
AVD>             localAddr.sin_family = AF_INET;
AVD>             localAddr.sin_addr.s_addr = htonl(INADDR_ANY);
AVD>             localAddr.sin_port = htons(0);
AVD>             int n = bind(SockHandle, (struct sockaddr *)
AVD> &localAddr, sizeof(localAddr));
AVD>             return n;
AVD>         }
        
AVD>         int sock_connect()
AVD>         {
AVD>             int n = connect(SockHandle, (struct sockaddr *) &servAddr, sizeof(servAddr));
AVD>             return n;
AVD>         }
        
AVD>         int write(void *buffer, int length)
AVD>         {
AVD>             int n = send(SockHandle, buffer, length, 0);        
AVD>             return n;
AVD>         };
AVD>         int write(struct status buffer, int length)
AVD>         {
AVD>             int n = send(SockHandle, &buffer, length, 0);       
AVD>             if (n == length) {return n;}
AVD>                 else {return -1;};
AVD>         };

AVD>         int read(void *buffer, int length)
AVD>         {
AVD>             int n = recv(SockHandle, buffer, length, 0);        
AVD>             return n;
AVD>         };

AVD>         int read_task(struct task *t)
AVD>         {
AVD>             int size = sizeof(struct task);
AVD>             int n = recv(SockHandle, t, size, 0);       
AVD>             if (n == size) {return 0 ; }
AVD>                 else {return -1;}
AVD>         };


AVD>         void sock_shutdown()
AVD>         {
AVD>             shutdown(SockHandle, SHUT_RDWR);
AVD>         };

AVD>         void sock_shutdown(int sd)
AVD>         {
AVD>             shutdown(sd, SHUT_RDWR);
AVD>         };

AVD> }; //end class cMySock

AVD> //signal's handlers
AVD> void sigterm_handler(int nsig)
AVD> {
AVD>       log_file.write(0,"SIGTERM caught");
AVD>       WORK_FLAG = false;
AVD> };
AVD> void sigint_handler(int nsig)
AVD> {
AVD>       log_file.write(0,"SIGINT caught");
AVD>       WORK_FLAG = false;
AVD> };

AVD> int main (void)
AVD> {

AVD>   signal(SIGTERM, sigterm_handler);
AVD>   signal(SIGINT, sigint_handler);

AVD>   cMyClientSocket *pMySocket = new (cMyClientSocket);
AVD>   cout << "create=" << pMySocket->sock_create() << endl;
AVD>   cout << "bind=" << pMySocket->sock_bind() << endl;
AVD>   //in this point it takes little virt. memory
AVD>   int sd = pMySocket->sock_connect();
AVD>   //it comes to top sorted by memory usage
AVD>   cout<<"connect="<< sd <<endl;

AVD>   ......
AVD>   inner logic:
AVD>   a while() loop which reads tasks from the server, performs them
AVD>   and replies.
AVD>   ......
  
AVD>   }

  
AVD> -- 
More information about the linuxCprogramming mailing list