What the heck?

S. Dale Morrey sdalemorrey at gmail.com
Thu Dec 12 14:16:06 MST 2013


Never mind, I got it sorted out.  The delay wasn't long enough, and the
feed server was treating the burst of requests as an attack.
Furthermore, there was no case to catch the connection reset, so the
callback was silently filling the array with undefined values.  Since it
never reached the part where it splits the hashes into messages, there
was no output to be had.
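
Something along these lines is what I have in mind for guarding against
that (a rough sketch; guardedAddBlock is just a placeholder name, and
blocks is the same array that addBlockToMessage pushes into in the code
quoted below):

function guardedAddBlock(err, blockhash){
    //Bail out on any error (connection resets included) instead of
    //letting undefined slip into the blocks array.
    if(err){
        console.log("getBlockHash failed:", err.code || err);
        return;
    }
    if(typeof blockhash === "undefined"){
        console.log("Empty block hash returned, skipping");
        return;
    }
    blocks.push(blockhash);
}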

Now I've got to figure out how to slow down the requests and feed them to
the server gradually.  Batching will help somewhat, but the most I can
send in a batch is 100, and even then sending them out without a delay
between sends is going to overwhelm the server quickly.

Another option is to get rid of the loop entirely and just have each
callback kick off the next request in a cycle, but I'm afraid that's
going to smash the stack :(
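
Thinking about it more, the cycle probably wouldn't actually smash the
stack: each callback is invoked from the event loop on a fresh stack, so
chaining the next request from the previous callback behaves like
iteration rather than recursion.  A rough sketch of what I mean, again
reusing the names from the quoted code (fetchNext is a placeholder):

//Issue one request at a time; the next request is scheduled from the
//previous callback, so the call stack never grows.
function fetchNext(blocknum, blockcount){
    if(blocknum >= blockcount){
        return;
    }
    var clientnum = blocknum % clients.length;
    clients[clientnum].getBlockHash(blocknum, function(err, blockhash){
        addBlockToMessage(err, blockhash);
        setTimeout(function(){
            fetchNext(blocknum + 1, blockcount);
        }, 500);    //breathing room between requests
    });
}
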
Anyways, I'll sort it out.


On Thu, Dec 12, 2013 at 11:30 AM, S. Dale Morrey <sdalemorrey at gmail.com> wrote:

> So this probably belongs on StackOverflow, but I figured there was a
> chance someone on the list might be able to help me figure out what I'm
> missing here.
>
> I have an app that I refactored to take advantage of asynchronous calls in
> node.js.
> Now my log is full of this...
>
> Setting up 267176 of 267180 block hash requests.
> Setting up 267177 of 267180 block hash requests.
> Setting up 267178 of 267180 block hash requests.
> Setting up 267179 of 267180 block hash requests.
>
> The function spewing this out is an iterator that calls setTimeout to
> run these requests with about 500ms of delay (so as not to swamp the server
> I'm getting the requests from).  Once it has the result of the block hash, it
> should be feeding it to a message queue, but it's not.
>
> I would expect that at some point during this process the timers would fire
> and I should see debug spew from the function being called by the timer.
> The fact is, I'm not.
>
> Here is the function itself...
> function blockCountCallBack(err, blockcount){
>
>     console.log("BlockCount: ", blockcount);
>     for(var i = lastblock; i < blockcount; i++){
>         //Multiplex across multiple clients so we don't overwhelm a single server
>         blockDB.put("lastblock",i);
>         var clientnum = 0;
>         if(i >= clients.length){
>           clientnum = i%clients.length;
>         }
>         console.log("Setting up "+i+" of "+blockcount+" block hash requests.");
>         setTimeout(function(){
>                 clients[clientnum].getBlockHash(i,addBlockToMessage);
>         },500);
>     }
> }
>
>
> And here is the addBlockToMessage function that should be getting called,
> but never does.
> //Adds blockhashes to the blocks array
> //When the array reaches its max size (currently 7KB), it creates an SQS message and sends it off.
> //It then clears the blocks[] so the whole process can start over again
> function addBlockToMessage(err, blockhash){
>
>     //This error handling isn't likely to work; we need to somehow verify that we are going to preserve the number itself.
>     //This is WAY too fragile!!!
>     if(err){
>         console.log(err);
>         if(err.code ==="ECONNREFUSED"){
>             console.log("link appears to be down.  Sleeping for 5
> minutes");
>             var blockid = blockhash;
>             setTimeout(function(){
>                 fetchBlockID(blockid);
>                 },FIVEMINUTES);
>             return;
>         }
>     }
>
>     //We should batch these into a couple hundred, just check the byte length and not exceed 7k
>     blocks.push(blockhash);
>
>     var msgContent = JSON.stringify(blocks);
>     var msgLength = msgContent.length;
>     if(msgLength >= 7000){
>         console.log("Sending: ",msgContent);
>         //We've reached the size limit
>         //Now we setup and send the SQS message
>         AWS.Request.send(sqs.SendMessage(qURL, msgContent));
>         //Finally clear blocks
>         blocks = [];
>     }
>
> }
>
> As you can see there should be plenty of debug log output, but all this
> thing is doing is iterating.
> Any thoughts?
>

