From: "Kevin S. Martin" <ksmartin@fnal.gov>
To: ecos-discuss@sources.redhat.com
Subject: [ECOS] DEBUG: Circular MBUF
Date: Fri, 25 Jun 2004 16:38:00 -0000
Message-ID: <40DC551B.40607@fnal.gov>
I have an application that opens a TCP/IP socket connection and then
writes a "bunch" of data over that connection at 1 Hz. If the bunch is
smaller than roughly 1000 bytes, everything works fine; when it is
larger (e.g. 2000 or 8000+ bytes), I very quickly get a series of
messages like this on the console:
DEBUG: Circular MBUF 0x004c7e80!
DEBUG: Circular MBUF 0x004c8500!
DEBUG: Circular MBUF 0x004c7e00!
DEBUG: Circular MBUF 0x004c7c80!
Once these messages appear, I assume the network thread is stuck in an
infinite loop, because no lower-priority thread ever runs again and all
networking to/from the target stops.
I'm using an i386 PCMB target with a fairly recent version of eCos
(April 2004) from the CVS repository, together with the FreeBSD
networking stack. I've tried increasing the amount of memory designated
for networking buffers, but this didn't help.
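For reference, the option I changed was the networking-buffer memory
setting in my eCos configuration, roughly like this (option name and
value quoted from memory, so they may not be exact):

cdl_option CYGPKG_NET_MEM_USAGE {
    # raised from the default; it didn't change the behaviour
    user_value 524288
};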
The code I'm using to open the connection and write is (abbreviated):
// (abbreviated; assume <network.h> and <cyg/kernel/kapi.h> are included;
//  the declarations below were elided in the original post)
int fdListen, newfd, reterr, nwritten;
int yes = 1;                            // for the setsockopt() boolean options
struct sockaddr_in serv_addr, cli_addr;
socklen_t addrlen;
// buffer and size are set up in the elided code inside the loop

// open TCP socket
if ( (fdListen = socket(AF_INET, SOCK_STREAM, 0)) < 0 )
{
    perror("can't open stream socket");
    exit(1);
}
// if the packet is "small" don't wait to send it (i.e. send it now)
if ( setsockopt( fdListen, IPPROTO_TCP, TCP_NODELAY, (char *)&yes,
                 sizeof(yes) ) == -1 )
{
    perror("setsockopt:TCP_NODELAY");
    exit(1);
}
// lose the pesky "address already in use" error message
if (setsockopt(fdListen, SOL_SOCKET, SO_REUSEADDR, &yes,
               sizeof(yes)) == -1)
{
    perror("setsockopt:SO_REUSEADDR");
    exit(1);
}
// bind our local address so a client can connect to us
bzero( &serv_addr, sizeof(serv_addr) );
serv_addr.sin_family      = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
serv_addr.sin_port        = htons(TCP_PORT1_TO_USE);
if ( (reterr = bind(fdListen, (struct sockaddr *)&serv_addr,
                    sizeof(serv_addr))) < 0 )
{
    perror("can't bind local address");
    exit(1);
}
// listen
if (listen(fdListen, 1) == -1)
{
    perror("listen");
    exit(1);
}
addrlen = sizeof(cli_addr);
// main loop
for (;;)
{
    // handle new connections
    if ((newfd = accept(fdListen, (struct sockaddr *)&cli_addr,
                        &addrlen)) == -1)
    {
        perror("accept");
    }
    else
    {
        debug_printf("\nnew connection from %s on socket %d",
                     inet_ntoa(cli_addr.sin_addr), newfd);
        debug_printf("\nFeed is ON");
        while (1)   // loop condition elided in the original post
        {
            .
            .
            .
            if ((nwritten = write(newfd, buffer, size)) < 0)
            {
                perror("\nwrite()");
                break;
            }
            .
            .
            .
            cyg_thread_delay(100); // 100 ticks = one second at the default 10 ms tick
        }
        debug_printf("\nFeed is OFF.");
        close(newfd); // bye!
    }
}
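One thing I notice writing this up: the write() above only checks for
errors, not for a short write. I don't think that's the cause here, but
for completeness a defensive version would look something like this
(just a sketch; write_all is an illustrative helper, not part of the
real program):

#include <sys/types.h>  // ssize_t
#include <unistd.h>     // write()

// Retry until the whole buffer has been handed to the stack.
static int write_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;
    while (done < len)
    {
        ssize_t n = write(fd, buf + done, len - done);
        if (n < 0)
            return -1;          // error; errno is set as for write()
        done += (size_t)n;
    }
    return 0;                   // whole buffer sent
}

The write() call in the loop above would then become
write_all(newfd, buffer, size).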
Any ideas?
Thanks,
Kevin
--
Kevin S. Martin
Fermi National Accelerator Laboratory
Accelerator Division, EE Support Department
630.840.2983
ksmartin@fnal.gov