
Re: [xmlblaster-devel] Connections/CallbackServer/Subscriptions



Hi, I now get VERY fast throughput: 10 000 subscribers on 20
connections (500 per connection), 1470 subscribers updated per second.

You know what I did? I did not run the test through Ant/JUnit, but on
the command line. Strange, ain't it?
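
(For anyone who wants to reproduce this: the test can be started
directly with the JUnit 3 text runner instead of going through the
Ant <junit> task; the class name below is only a placeholder for the
real test class:

   java -cp lib/xmlBlaster.jar:lib/junit.jar junit.textui.TestRunner MyManySubscribersTest

One guess is that the JVM forked by Ant runs with different defaults,
e.g. a smaller heap, which could explain the difference.)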

//Peter

On 27 Sep, Marcel Ruff wrote:
> Peter Antman wrote:
> 
>>Sorry for the noise. I am stupid, a small programming error ;-(
>>
>>I however find it very interesting that there seems to be no tradeoff
>>in speed between having all subscribers on one connection and having
>>them distributed over several connections.
>>
>>My last round gave me:
>>
>>5000 subscribers on one connection and cb.queue.maxMsg=6000:
>>a throughput of 28 updated subscribers/second
>>
> Only 28 msg/sec overall update seems to be veeery slow.
> I have on my 600 MHz CPU up to 790 msg/sec updated
> with SOCKET protocol.
> (client is on another machine).
> 
> If somebody wants to chat about this topic, try:
> 
> java -cp lib/xmlBlaster.jar  javaclients.chat.SimpleChat  -loginName 
> MYNAME  -client.protocol SOCKET -socket.hostname www.xmlBlaster.org 
> -socket.port 9455
> 
> (replace the MYNAME by a login name)
> 
>>
>>And 5000 subscribers on 10 connections (500 each) with cb.queue.maxMsg=1000:
>>a throughput of 29 updated subscribers/second
>>
>>//Peter
>>
>>On 27 Sep, To: xmlblaster-devel at server.xmlBlaster.org wrote:
>>  
>>
>>>Hi, now I have at least found out something more. It seems as if the
>>>cb.queue.maxMsg setting plays a role in this. With the default setting
>>>of 1000, the queue gets full when there are more subscribers.
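>>>
>>>(Presumably this default can also be raised server side, either in
>>>xmlBlaster.properties or, assuming xmlBlaster reads its properties
>>>from the command line as well, with something like
>>>
>>>   java org.xmlBlaster.Main -cb.queue.maxMsg 5000
>>>
>>>I have only verified the xmlBlaster.properties route, see below.)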
>>>
>>>I tried to set it programmatically when logging in:
>>>
>>>        manyConnections[ci] = new XmlBlasterConnection(glob);
>>>        // "<qos></qos>"; during login this is manipulated (callback address added)
>>>        ConnectQos loginQosW = new ConnectQos(glob);
>>>        // With many subscribers on one connection we must raise the
>>>        // maximum size of the callback queue!
>>>        CbQueueProperty cbProp = loginQosW.getCbQueueProperty();
>>>        cbProp.setMaxMsg(breakPoint * 1000);
>>>        log.trace(ME, "Login qos: " + loginQosW.toXml());
>>>        manyConnections[ci].login(sub.loginName, passwd, loginQosW, this);
>>>
>>>But it does not help. Not until I set cb.queue.maxMsg=5000 in
>>>xmlBlaster.properties does it start to work. I must also say that even
>>>if I distribute the 5000 subscribers over 10 connections, it looks as
>>>if they are all put into the same queue:
>>>
>>>[Sep 27, 2002 11:32:10 AM DUMP  CbConnection-/node/http:80.72.2.80:3412/client/Tim/13] CallbackQos=
>>><qos>
>>> <sender>Tim</sender>
>>> <priority>5</priority>
>>> <subscriptionId>__subId:5209</subscriptionId>
>>>   <rcvTimestamp nanos='1033119119183000000'/>
>>> <queue index='4791' size='4794'/>
>>> <route>
>>>      <node id='http://80.72.2.80:3412' stratum='0' timestamp='1033119119183000000' dirtyRead='false'/>
>>> </route>
>>></qos>
>>>[Sep 27, 2002 11:32:10 AM DUMP  CbConnection-/node/http:80.72.2.80:3412/client/Tim/13] CallbackQos=
>>><qos>
>>> <sender>Tim</sender>
>>> <priority>5</priority>
>>> <subscriptionId>__subId:5207</subscriptionId>
>>>   <rcvTimestamp nanos='1033119119183000000'/>
>>> <queue index='4793' size='4794'/>
>>> <route>
>>>      <node id='http://80.72.2.80:3412' stratum='0' timestamp='1033119119183000000' dirtyRead='false'/>
>>> </route>
>>></qos>
>>>[Sep 27, 2002 11:32:10 AM TRACE CbConnection-/node/http:80.72.2.80:3412/client/Tim/13] Before update 4794 acknowledged messages ...
>>>
>>>It is as if they are all placed in the same callback queue, despite the
>>>fact that the subscriptions are made on different connections...
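>>>
>>>One hedged guess, since the dump above shows every update routed through
>>>client/Tim: if the server keys the callback queue on the login session
>>>rather than on the physical connection, then giving each connection its
>>>own login name should also give it its own queue. A minimal sketch,
>>>reusing only the calls from the snippet above (numConnections is a
>>>placeholder for however many connections the test opens):
>>>
>>>for (int ci = 0; ci < numConnections; ci++) {
>>>   manyConnections[ci] = new XmlBlasterConnection(glob);
>>>   ConnectQos qos = new ConnectQos(glob);
>>>   // A default-sized queue per session should now suffice, since each
>>>   // session only buffers updates for its own subscribers.
>>>   qos.getCbQueueProperty().setMaxMsg(1000);
>>>   // Distinct login name per connection, e.g. Tim-0, Tim-1, ...
>>>   manyConnections[ci].login(sub.loginName + "-" + ci, passwd, qos, this);
>>>}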
>>>
>>>Any ideas on this? (Should I perhaps check the test code in?)
>>>
>>>//Peter
>>>
>>>On 26 Sep, To: xmlblaster-devel at server.xmlBlaster.org wrote:
>>>    
>>>
>>>>On 26 Sep, Marcel Ruff wrote:
>>>>      
>>>>
>>>>>Hi Peter,
>>>>>
>>>>>usually after some 10000 msg sent the performance has reached its
>>>>>maximum (saturation).
>>>>>
>>>>>>>Message/second updated
>>>>>>>        IOR              SOCKET           RMI              XMLRPC
>>>>>>>     oneCon  manyCon   oneCon  manyCon  oneCon  manyCon  oneCon  manyCon
>>>>>>>100    35      27
>>>>>>>500    20       2       11      0       21      2        -       -
>>>>
>>>>>Does this mean that  for IOR oneCon 500 x 20 = 10 000 msg/sec are updated?
>>>>>        
>>>>>
>>>>No, it actually says that for one message sent, it manages to update 20
>>>>subscribers per second. With 500 subscribers on the same query, one
>>>>message takes 500/20 = 25 seconds to reach all subscribers.
>>>>
>>>>At least I hope that is what I measure. Maybe I should commit the code
>>>>tomorrow so you can see what I am doing: almost all of it is taken from
>>>>other xmlBlaster tests, especially the many-subscribers test in qos.
>>>>
>>>>What is really annoying is that on several occasions xmlBlaster reaches
>>>>a state where it stops updating the subscribers. I can see the server
>>>>start pinging and the subscribers waiting, but no new messages arrive. I
>>>>have had this scenario both for RMI and IOR when having 2000 subscribers
>>>>on one connection, but also in a new test where I had 5000 distributed
>>>>among 10 connections, i.e. 500 subscribers/connection.
>>>>
> This seems to be a serious bug.
> 
>>>>
>>>>Here I reached about 2500 updates before it stopped. In the IOR and RMI
>>>>cases mentioned above, xmlBlaster stopped publishing at 1500-1700
>>>>updated clients. I guess there is a real possibility that there is a
>>>>deadlock somewhere... I will dump the stack tomorrow to see.
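>>>>
>>>>(For the dump: a Unix JVM prints a full thread dump of all threads to
>>>>stdout when it receives SIGQUIT, so something like
>>>>
>>>>   kill -QUIT <server-jvm-pid>
>>>>
>>>>against the xmlBlaster process should reveal any deadlocked threads
>>>>without extra tooling; <server-jvm-pid> is of course a placeholder.)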
>>>>
> Yes, this would be nice (with an up-to-date version so the code is
> comparable).
> 
> Marcel

-- 
------------------------------------------------------------
Peter Antman	Chief Systems Architect, Business Development
Technology in Media, Box 34105 100 26 Stockholm
WWW: http://www.tim.se	WWW: http://www.backsource.org
Email: pra at tim.se	 
Phone: +46-(0)8-506 381 11 Mobile: 070-675 3942 
------------------------------------------------------------