From bogus@does.not.exist.com Tue Apr 3 04:48:41 2007
From: bogus@does.not.exist.com ()
Date: Tue, 03 Apr 2007 04:48:41 -0000
Subject: No subject
Message-ID:

For programming in general, the standard library functions are better.
With the exception of OS/2 Warp, the C standard library is standard
across platforms (meaning, yes, there are platform-specific extensions,
but the stdlib functions will be the same and work the same),
especially on platforms billed as POSIX-compliant.

Unix system calls may be a smidge quicker in execution time. However,
they are a pain to use in many cases, and in most cases you're not
going to write code that does the job better than the stdlib functions
do. That's because the stdlib functions are basically wrappers around
the system calls: if you look at the glibc code for fopen() (a stdlib
call), you'll see that it eventually calls open() (a system call).
Also, some (most?) of the stdlib I/O functions are buffered, meaning
less time spent bringing in the next chunk of data from the
file/device/socket/whatever.

IMHO, considering the various tradeoffs between the stdlib and system
calls, there isn't really much difference as far as speed goes. But
the stdlib program will be a helluva lot easier to maintain and port!

--
Matthew Vanecek
perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
********************************************************************************
For 93 million miles, there is nothing between the sun and my shadow
except me. I'm always getting in the way of something...

From bogus@does.not.exist.com Tue Apr 3 04:48:41 2007

RE: [LCP]segmentation fault core dump

May I ask what a batch debugger is?
From what I understand, what you call a non-interactive debugger
is post-mortem core debugging, but I don't see what the batch
version would be.
Thanks

--
Vincent Penquerc'h
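For what it's worth, gdb can be driven non-interactively over a core file, which is about as close to a "batch debugger" as it gets. A sketch of a command file (the commands and file names are illustrative):

```gdb
# cmds.gdb -- commands for a non-interactive (batch) post-mortem run
bt full
info registers
info threads
quit
```

Invoked as `gdb --batch --command=cmds.gdb ./myprog core`, gdb loads the core, runs the commands, prints the output, and exits without ever prompting.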

From bogus@does.not.exist.com Tue Apr 3 04:48:41 2007

...
You use ioperm(2), or alternatively iopl(2), to tell the kernel to
allow the user-space application to access the I/O ports in question.
Failure to do this will cause the application to receive a
segmentation fault.
...

--
ianezz AT sodalia.it
Visit LinuxTrent at http://www.linuxtrent.it

From bogus@does.not.exist.com Tue Apr 3 04:48:41 2007

    O_EXCL  When used with O_CREAT, if the file already exists it is
            an error and the open will fail. In this context, a
            symbolic link exists, regardless of where it points.
            O_EXCL is broken on NFS file systems; programs which rely
            on it for performing locking tasks will contain a race
            condition. The solution for performing atomic file
            locking using a lockfile is to create a unique file on
            the same fs (e.g., incorporating hostname and pid), and
            use link(2) to make a link to the lockfile. If link()
            returns 0, the lock is successful. Otherwise, use stat(2)
            on the unique file to check if its link count has
            increased to 2, in which case the lock is also
            successful.

(Apologies for the crap formatting.) I believe this is a widely used,
cross-platform solution to the brokenness of NFS.

If this isn't a solution, why are you depending on valid timestamps on
the lock files? The source files, I could understand. If you need
reliable mtime/ctime updates, I believe Linux's 'noac' NFS mount
option might be what you want. I don't know about other platforms.

As the previous poster said, a network daemon might be a better idea.
You could even have the daemon do the source file I/O, and get rid of
NFS entirely.
--m@

On Mon, 15 Jul 2002, Jack Lloyd wrote:

> Problem:
>
> We need to do some kind of locking such that one server runs over a
> particular set of files (in this case, a source code repository). More than
> one server can run, just not over the same repository.
>
> Current Solution:
>
> Do a lock file. It works, but only on a local filesystem (since we depend
> on good timestamps and whatnot).
>
> New Problem:
>
> Need to make sure that the repository is on a local filesystem.
>
> Current Solution:
>
> Use statfs and check for NFS_SUPER_MAGIC. The problem is that NFS is not
> the only remote file system in existence, last I checked. Is there a better
> solution to this beyond just checking for CODA_SUPER_MAGIC,
> SMB_SUPER_MAGIC, etc along with NFS_SUPER_MAGIC?
>
> Offtopic to this list:
> On *BSD, we do a statfs and check the flags for MNT_LOCAL. And everywhere
> else, we punt and say "yes, it's local". Anyone know of a way to check for
> this on Solaris, IRIX, etc? I can't find anything at all for this.
>
> _______________________________________________
> This is the Linux C Programming List
> : http://lists.linux.org.au/listinfo/linuxcprogramming List

From bogus@does.not.exist.com Tue Apr 3 04:48:41 2007

RETURN VALUE
       On success, the number of bytes read is returned (zero indicates
       end of file), and the file position is advanced by this number.
       It is not an error if this number is smaller than the number of
       bytes requested; this may happen for example because fewer bytes
       are actually available right now (maybe because we were close to
       end-of-file, or because we are reading from a pipe, or from a
       terminal), or because read() was interrupted by a signal. On
       error, -1 is returned, and errno is set appropriately. In this
       case it is left unspecified whether the file position (if any)
       changes.
From bogus@does.not.exist.com Tue Apr 3 04:48:41 2007

...
> void child (prog *run, char *runthis)
> {
>     close(STDOUT);
>     dup(run->cp[OUTPUT]);
>     close(STDIN);
>     dup(run->pc[INPUT]);

You should close and dup for stderr here as well, since any errors
will go to the tty, which could raise a SIGTTOU signal if the child
process has been disassociated from the tty.

>     close(run->pc[OUTPUT]);
>     close(run->cp[INPUT]);

You're closing the parent's descriptors here, as you should, but you
don't do the reverse in the parent. Have the parent's side of the fork
close run->cp[OUTPUT] and run->pc[INPUT] before you return from the
runthis() function.

>     execlp (runthis, NULL);
>     printf ("ERROR!\n");
>     exit(1);
> }
...

You may also wish to look at socketpair(), which creates a
bidirectional socket/pipe and is less work to set up than multiple
pipe()s.

- Steve