Summary: ASTERISK-21872: high CPU usage ~15 seconds into call if rtpkeepalive set on channels when Asterisk is in a generic bridge and passing RFC2833 DTMF
Reporter: hristo (hristo)
Labels:
Date Opened: 2013-06-06 10:41:10
Date Closed:
Priority: Minor
Regression?: No
Status: Open/New
Components: Core/General
Versions: SVN, 1.8.17.0, 1.8.19.1, 1.8.20.0, 1.8.22.0, 13.18.4
Frequency of Occurrence: Constant
Related Issues:
Environment: Debian 6.0 64-bit
Attachments:
( 0) 2-calls-one-sending-many-dtmfs-asterisk-debug.txt
( 1) forward-stream-first-call-after-asterisk.pcap.txt
( 2) forward-stream-first-call-before-asterisk.pcap.txt
( 3) full.txt
( 4) sample-config.diff
( 5) trafficdump.pcap
( 6) vmstat.txt
Description: If I send several DTMFs to Asterisk, one after the other, fast enough, it blocks other voice RTP packets for as long as several hundred milliseconds. This seems to affect *all* RTP streams on a server.

I can say for sure that Asterisk is not dropping the RTP packets, because after a while it sends all of them at once. It seems as if they are being held by something while the DTMFs are being processed/forwarded.

This only occurs in non-Packet2Packet mode.

Originally I saw the problem when several people were connecting to a conference at about the same time and entering their PINs at about the same time, thereby producing a lot of DTMFs. The conference runs on dedicated hardware and is unrelated; Asterisk just sits in the middle and bridges the calls. I have managed to reproduce this with only two calls and as few as 10-15 DTMFs, provided they are sent fast enough.



Attached is a debug console log from the following call scenario. In this case both calls were generated from a dedicated server and terminated on another dedicated server.

Call 1:
A (IP 1.1.1.1) dials 1000 --> Asterisk (IP 2.2.2.2) ---> B (IP 3.3.3.3)

Call 2:
A' (IP 1.1.1.1) dials 2000 --> Asterisk (IP 2.2.2.2) ---> B' (IP 3.3.3.3)



Both calls are active at this point. A' on Call 2 starts sending DTMFs (in this case 40 of them). As a result RTP packets from Call *1* in both directions are delayed by 150-160 ms and are being sent in bursts.

In the logs I often see:
res_timing_timerfd.c:225 timerfd_timer_ack: Expected to acknowledge 1 ticks but got 5 instead

and the CPU is close to 100% (caused by the asterisk process). As soon as all DTMFs have been sent, the RTP streams return to normal, with Asterisk sending one packet every 20 ms on average.

Attached is also a filtered packet capture that shows only the forward RTP stream on Call 1 from A -> Asterisk and from Asterisk -> B. "Time" represents the delta from the previous packet. Under normal conditions this should be close to 0.020 s (or 20 ms).

One example of the problem can be seen at line 1235 in 'forward-stream-first-call-after-asterisk.pcap.txt'. The packet there has been held for ~160 ms, then sent together with the next 7 packets all at once.

The RTP packets from the corresponding call leg (before Asterisk) start at line 1244 in 'forward-stream-first-call-before-asterisk.pcap.txt' and are all equally spaced at about 20 ms.

There are many such examples - simply search for 0.000 (deltas which are less than 1 ms) to identify groups of packets that are sent together. The same problem is present in the backward stream too (not attached).
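
For anyone who wants to pull the same deltas out of the original captures themselves, something along these lines should work (just a sketch; it assumes a reasonably recent tshark where -Y is the display filter option - older versions use -R - and you may need a "-d udp.port==<rtp port>,rtp" if the stream is not auto-detected as RTP):

{code}
tshark -r forward-stream-first-call-after-asterisk.pcap -Y rtp \
    -T fields -e frame.time_delta_displayed -e rtp.seq
{code}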



How to reproduce - add the following to the dialplan:

exten => _X.,n,Dial(SIP/B@3.3.3.3,,t)

The 't' option is important because it effectively disables Packet2Packet mode. Connect 2 calls (2 sets of telephones) and start dialing DTMFs as fast as you can on one of them (or simply generate 2 calls and send the DTMFs as I did). This will disrupt the call between the other set of phones if done fast enough.
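
For completeness, the whole test context is trivial - roughly something like this (a sketch; the context name and the first priority are placeholders, only the Dial line with the 't' option matters):

{code}
[test-bridge]                               ; placeholder context name
exten => _X.,1,NoOp(test call to ${EXTEN})  ; placeholder first priority
exten => _X.,n,Dial(SIP/B@3.3.3.3,,t)       ; 't' keeps Asterisk in the media path
{code}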

I have tested this on 3 servers (2 physical and one virtual). All of them were running the same OS (Debian 6), so this may end up being an OS or res_timing_timerfd problem after all, but I really cannot test it on a different distribution.
I tested with the following versions and was able to reproduce the problem with all of them:

1.8.22.0
1.8.20.0
1.8.19.1
1.8.17.0
Comments:By: Michael L. Young (elguero) 2013-06-06 17:51:27.788-0500

If res_timing_timerfd is suspected to be causing an issue, can you try disabling it (noload=>res_timing_timerfd.so) and see if you have the same results?

You can disable it in the modules.conf file.  Asterisk should then be using res_timing_pthread.
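
Roughly, the change in modules.conf would look like this (just a sketch of the noload line):

{code}
[modules]
autoload=yes
noload => res_timing_timerfd.so   ; Asterisk should then fall back to res_timing_pthread
{code}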

By: hristo (hristo) 2013-06-07 05:50:13.200-0500

I don't think this is related to any timing module. I tested with all of the following by enabling just one at a time:

res_timing_timerfd.so
res_timing_dahdi.so
res_timing_pthread.so

I also disabled all of them and tested again. In all cases the problem is still present. I believe the res_timing_timerfd debug messages appear in the logs as a result of the problem, but are not causing it.

BTW, Asterisk shouldn't be using any timing module if it only forwards the packets back and forth between two other endpoints, should it?


By: Rusty Newton (rnewton) 2013-06-19 16:29:53.082-0500

I wasn't able to reproduce this on SVN-branch-1.8-r391778 or 11.4.0. I followed your guidance in the description, but no luck (or good luck?). I can bring up two simple SIP to SIP calls, media through Asterisk, and on one call send across 40-60 DTMF digits (2833) with a 50ms interval and 250ms duration, and it doesn't block or slow any RTP on the other call. Attempting something similar by hand (with a higher interval, of course, since my fingers can't tap at 50ms intervals) ends up with the same results.

Asterisk was using res_timing_timerfd.so, but I don't think Asterisk is using a timing interface in this case.

You'll have to provide more detail on how to reproduce (perhaps sip.conf config for the endpoints) and any other details on configuration that may be relevant.

If you are doing a lot of other things on this system then I would recommend building Asterisk from default configs on another system for a very simple test scenario and trying to reproduce it there. If you can't, then compare between the two and slowly add pieces to the test system until you find out what triggers it.

This is one we'll have to reproduce to move forward. I'll leave this open for a few weeks to see if you can provide additional detail that would help.

* If you do get a method for sure-fire reproduction in a clean asterisk install, then you may also want to send along full SIP/RTP pcaps that we can view with wireshark.

* Please also post your "uname -a" and "lsb_release -a" output as well. Below is what I tested with.

{noformat}
root@ubuntu:/etc/asterisk# uname -a
Linux ubuntu 3.2.0-45-generic #70-Ubuntu SMP Wed May 29 20:12:06 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
root@ubuntu:/etc/asterisk# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.2 LTS
Release: 12.04
Codename: precise
{noformat}

By: hristo (hristo) 2013-06-21 09:55:01.249-0500

At least with 1.8.22, I ran the tests after rebuilding Asterisk from source. No changes, all the default settings/modules. Server CPU was at 1-2% before I started testing and came back to 1-2% as soon as the test was over.

I used the sample asterisk configuration (make samples), with only the following trivial changes:

Added two new SIP peers:
- the Call Generator
- the Terminating Endpoint

Set in sip.conf (because the call generator is behind NAT):
canreinvite=no
nat=yes

Added a single dial plan line:
exten => _X.,n,Dial(SIP/B@3.3.3.3,,t)
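
Put together, the peer section of my sip.conf looked roughly like this (a sketch; peer names, host and secret are placeholders, canreinvite=no and nat=yes are the actual settings I added):

{code}
[callgen]             ; the call generator, behind NAT
type=friend
host=dynamic
secret=********
canreinvite=no
nat=yes

[term]                ; the terminating endpoint (B)
type=peer
host=3.3.3.3
canreinvite=no
{code}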

uname -a and lsb_release -a output as requested:
{code}
hristo.trendev@pbx02:~$ uname -a
Linux pbx02 2.6.32-5-amd64 #1 SMP Fri May 10 08:43:19 UTC 2013 x86_64 GNU/Linux
hristo.trendev@pbx02:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 6.0.7 (squeeze)
Release: 6.0.7
Codename: squeeze
{code}

I will try to set up a fresh Ubuntu 12.04 server on Monday, will retest and will report back.

By: hristo (hristo) 2013-06-24 11:06:23.324-0500

I had a fresh Ubuntu server installed inside a Xen VM (1 virtual CPU, Xeon E5645 @ 2.40 GHz). It is somewhat harder to reproduce with Ubuntu 12.04 and is not as pronounced as with Debian, but it is still noticeable. I had to set up 5 calls sending DTMFs before I could notice (hear) it.

What I have discovered is that it only disrupts the audio when Asterisk is started with the "-p" option ("Run as pseudo-realtime thread"). If Asterisk is not running in pseudo-realtime mode, the CPU again goes to 100%, but it seems that the RTP packets are still sent out on time in this case.

Until now all tests, including the initial ones above, were made with "-p", but I hadn't really noticed it because it was being set in the init.d script. However, for the Ubuntu tests I used the sample init script ("make config") and only added the "-p" option to verify my findings, so I don't think it is related to the init.d script itself.

The interesting part is that the CPU spends about 75-80% of its time in the kernel (system). See attached "vmstat.txt". If we assume that the newer kernels are better optimized, that would explain to some extent why it is harder to reproduce with Ubuntu (3.2.xx kernel) than with Debian (2.6.xx kernel). Additionally, it also makes sense to assume that when Asterisk runs in pseudo-realtime mode and needs 75-80% of kernel CPU time, it may prevent the kernel from completing other tasks on time - for example sending the RTP packets out - and therefore cause the RTP delays that I see.
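
If anyone wants to check the user/system split on their own box while reproducing, something like this should be enough (assuming the sysstat package is installed for pidstat):

{code}
vmstat 1                              # overall us/sy split, as in the attached vmstat.txt
pidstat -u -p $(pidof asterisk) 1     # per-process %usr vs %system for asterisk
{code}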

I am also attaching the diff against the sample configs (with IP and password edited). Please ignore the SIPAddHeader lines, they are only present to allow the terminating endpoint to accept the generated calls.

Can you at least confirm that the CPU load gets close to 100% when the DTMFs start coming? It takes a few seconds for the load to build up.

By: Rusty Newton (rnewton) 2013-06-25 13:50:27.090-0500

Thanks for all the additional info.

I tested both in -p mode and without (no flags). In both cases I succeeded in getting Asterisk to consume 100% of a single CPU with a single call. To reach that level of consumption I had to send a stream of DTMF with a ~25ms interval. Testing with a 50ms interval or above, the CPU barely hits a percent or two with a couple of calls constantly streaming DTMF.

As you mentioned, if you are running in -p mode then bad things are likely to happen when Asterisk is hitting the CPU that heavily.

In either case, regarding the CPU consumption, I think we may just be hitting a performance limitation within Asterisk for DTMF processing. I'll ping some of the devs to see what they make of it.

In the meantime:

1. Can you have your generators/devices send DTMFs slower?
2. Do you need to run Asterisk with -p?


By: Rusty Newton (rnewton) 2013-06-25 14:30:48.844-0500

After a little research, I believe sending DTMF to Asterisk (or any other telephony hardware/software) with an interval under 40ms is something we really shouldn't expect to go well. Reference: http://tools.ietf.org/html/rfc4733#section-3.1

Let us know if you can reproduce the overly high CPU consumption when sending DTMF with an interval of 50ms or above with a few channels.

{quote}
1. Can you have your generators/devices send DTMFs slower?
2. Do you need to run Asterisk with -p?
{quote}

By: hristo (hristo) 2013-06-26 11:33:07.988-0500

{quote}
1. Can you have your generators/devices send DTMFs slower?
{quote}
Certainly, as far as testing is concerned. However, I am able to reproduce this with a single phone (tested with SNOM 370, SNOM 320) and a single finger. The only reason I use a call generator is to give my fingers some time to rest. I don't think that I am able to hit the dialpad in 25ms intervals, but I will verify with a packet capture. My bigger problem is that this also happens during completely normal usage, which is why I originally opened the ticket.
Take, for example, a conference bridge service - the participants tend to connect all at the same time. Take a single 20-participant conference: each participant has to dial a 6-digit PIN, and almost all usually try to connect around the time the conference starts. It is not unlikely that, out of the total of 120 DTMFs, Asterisk will have to process some of them in bursts (possibly at 25ms intervals). Also, the CPU is probably under 10-20% during normal usage when no DTMFs are hitting the server, so it would really be a waste of resources to do capacity planning based on DTMF performance.
BTW, the conference bridge service runs on a dedicated system, so Asterisk is really only forwarding the RTP/Events stream in this case.

{quote}
2. Do you need to run Asterisk with -p
{quote}
I am considering removing it. I don't think it is really needed in this particular case, but it was set by someone else. It took me some time to discover it myself. This will hopefully resolve the RTP delay problem, leaving only the high CPU usage problem, which is worrisome by itself.

{quote}
Let us know if you can reproduce the overly high CPU consumption when sending DTMF with a interval of 50ms or above with a few channels.
{quote}
Will try to test this exact scenario tomorrow and will report back.

By: hristo (hristo) 2013-06-27 08:16:54.011-0500

I can confirm that by default the call generator sends 260ms long DTMFs with 40ms pauses between two DTMFs. That's roughly a rate of 3 DTMFs per second per call. Indeed, I can see in the captures that there is a 40ms pause between the last 'end' Event and the next Event packet. 40 ms is on the RFC boundary, but still within limits. All of the tests from my previous posts were made with the default setting of 40ms.

As for the possibility of triggering this from an end device - this is indeed possible. I can see that the SNOM 370, for example, sends DTMFs with roughly a 30ms pause between two digits. Moreover, the DTMFs seem to have a minimum enforced duration of 120 ms, which is about 8 DTMFs per second. Hitting a button 8 times per second is really not that hard.

I couldn't test with exactly 50ms DTMF pauses, because the G.711 codec used in the test calls is set to 20ms samples, and it seems that this also affects the pause between the DTMFs, which has to be in 20ms steps too. I can reconfigure the codec to use 10ms samples and should then be able to test with 50ms inter-DTMF pauses, but for now I did the tests with 60ms between DTMFs.
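
If I understand the sip.conf packetization option correctly, that reconfiguration would be something along these lines (an assumption on my part - I have not actually switched to 10ms framing yet):

{code}
; sip.conf, on the test peers - request 10 ms packetization for G.711
disallow=all
allow=ulaw:10    ; ulaw shown as an example; use alaw if that is the negotiated codec
{code}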

When I set the pause to 60ms I get mixed results:
* A single call no longer seems to trigger the problem.
* I am sometimes able to reproduce it with as few as 6 concurrent calls (all sending DTMFs with a 60 ms pause and 240ms duration). The more concurrent calls I have, the easier it is to reproduce the problem. It seems that there is no magic number after which I can reproduce this with a 100% success rate, but at 30 concurrent calls it happens almost every time.
* If I decrease the DTMF duration to, say, 80ms and keep the pause at 60 ms, then I am able to send more DTMFs per second per call, and in this case I need fewer concurrent calls to trigger the problem.
* The problem doesn't appear immediately after all the calls have started sending DTMFs. It takes some time into the test (sometimes 1-2 seconds, sometimes 5-6) before the CPU usage starts increasing, but once it starts it looks like it will almost always go all the way up to 100%, provided the test doesn't end in the meantime.
* Regardless of what I set for pause and duration, I was never able to load the server at say 60% or 80%. The problem is either there and the CPU goes to 100% or it is only slightly affected, leaving the CPU at around 10-20%, which I would consider normal. It looks like an all-or-nothing case.

I have made the tests both on the Ubuntu and the Debian servers and haven't noticed any major difference.

By: Rusty Newton (rnewton) 2013-07-09 13:48:59.217-0500

@hristo

So this took me a bit to narrow down.

I won't bore you with all the details of what I tried, but after trying to reproduce it some more and never getting the results you got, I double-checked your sip.conf and tried to get as close to your config as possible.

After swapping various options in and out I found that I can reproduce the issue exactly as you describe by adding "rtpkeepalive=30" to my sip.conf. Without that option I don't see the issue.
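
For clarity, the exact change was just this in sip.conf (30 being the value I tested with; commenting it out, or setting it to 0, disables the keepalives):

{code}
[general]
rtpkeepalive=30
{code}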

I put calls across three Asterisk systems using an originate from the first; once connected through a Dial(SIP/C,,t) on the second system to the third system, the first would SendDTMF(bunchofdtmf,60,200) through to the third, which was sitting in a Wait().

The second system would have a CPU load increase about 14 seconds into the calls. The load would jump from a working load of about 1-15% up to 100-250%, depending on how many calls I had running (5-30 calls). I even see the issue with a single call (jumps from 1% to about 15% load).

Please confirm if you can reproduce the issue after commenting out "rtpkeepalive=".

This looks like a bug and now that we can reproduce it I'll go ahead and put it in an open state.

* reproduced with SVN-branch-1.8-r391778

By: Rusty Newton (rnewton) 2013-07-09 14:22:01.567-0500

Just realized that *spike* is not the best word to use in my comment above.  Really the CPU usage jumps up and stays up the entire call.

By: Rusty Newton (rnewton) 2013-07-09 14:48:00.966-0500

Attached are the full log and a pcap of the issue occurring on a single call. I don't see anything obvious in the log around 15 seconds into the call.
* full.txt
* trafficdump.pcap

By: hristo (hristo) 2013-07-11 06:40:39.963-0500

I did a copy/paste of the peer and all its settings from a working configuration while trying to apply the minimum set of changes to the sample configs in order to reproduce the problem. That's how the rtpkeepalive setting ended up in my test configs.

I did another round of testing and at least for me the rtpkeepalive setting makes no difference. I see the exact same problem without the rtpkeepalive setting too. I tested both on Ubuntu and Debian by removing the setting from the peer and also verified via "sip show settings" that it's disabled globally.

BTW, not sure if it is at all related, but I can see that the "rtpkeepalive" RTP packets, according to Wireshark, always use the PCMU codec (regardless of the codec negotiated for the call). Also, they seem to be sent alongside the "normal" RTP packets. I thought the "rtpkeepalive" packets were supposed to be sent only when there is no "normal" RTP activity - for example during hold with no MOH configured - but I never expected to see them as extra packets in an active RTP stream (which really makes no sense). However, this is a completely different problem. I am only mentioning it here for completeness and just in case it happens to be somehow related to the current ticket.

By: Michael L. Young (elguero) 2013-07-11 07:18:50.065-0500

The issue you mention in the last paragraph about the PCMU codec payload has been fixed and will be in 1.8.23.0 (ASTERISK-21246).

By: Rusty Newton (rnewton) 2013-07-11 10:21:21.487-0500

@hristo

Let's make sure we are both testing with the same versions. Especially considering the fix on rtpkeepalive that Michael mentioned. Can you test with SVN-branch-1.8-r391778? There could be some other fixes that have changed behavior as well between the version you are using and what I tested with.

I removed and added rtpkeepalive about ten times (five times on, five times off); with it off I cannot reproduce the issue, and with it on I can reproduce the issue every time, even with a single call. I guess there is a slim chance that the issue just happened to occur every other time I tested, regardless of the rtpkeepalive setting. For accuracy's sake I was applying it globally each time and not per peer, always set to 30, so rtpkeepalive=30.

By: hristo (hristo) 2013-07-12 09:46:41.118-0500

I just finished testing with SVN-branch-1.8-r391778 on the Ubuntu server. rtpkeepalive still doesn't seem to make any difference in my setup, only the pause between DTMFs is important. As soon as I set it to 40 ms, I start seeing the problem even with a single call.

Maybe setting rtpkeepalive simply triggers the problem faster, but is not the root cause of it? Keeping in mind that this is most probably a performance problem, I can imagine that the CPU/system also plays a role. Could it be that the system I am testing on already has CPUs slow enough that it doesn't need rtpkeepalive turned on to experience the problem? Just speculating.

I used a Citrix virtual server with a single core for the last set of tests:
{code}
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU           E5645  @ 2.40GHz
stepping : 2
microcode : 0x13
cpu MHz : 2394.056
cache size : 12288 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 48
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc up rep_good nopl pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 popcnt aes hypervisor lahf_lm ida arat dtherm
bogomips : 4788.11
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
{code}

However, on a 4 core physical server with Debian 6 and 1.8.22.0 I get the same results. Unfortunately, I cannot test with the SVN version on the physical server, nor can I install Ubuntu on it.

By: Rusty Newton (rnewton) 2013-07-24 15:28:47.310-0500

{quote}
Maybe setting rtpkeepalive simply triggers the problem faster, but is not the root cause of it? Keeping in mind that this is most probably a performance problem, I can imagine that the CPU/system also plays a role. Could it be that the system I am testing on already has CPUs slow enough that it doesn't need rtpkeepalive turned on to experience the problem? Just speculating.
{quote}

In my case I couldn't reproduce the issue without rtpkeepalive unless I set the DTMF interval and duration under the spec we discussed.

If I tested with an interval and duration under spec, then I could get a similar CPU usage issue to happen immediately, even with a single call.

I couldn't reproduce it with an interval and duration within spec unless I set rtpkeepalive. Without it set, I couldn't reproduce it even while streaming DTMF constantly on every channel at the interval and duration you specified, with 30+ simultaneous calls for over a minute. Even then, CPU usage would barely hit 10%, sometimes up to 15%.

With rtpkeepalive on I always got the CPU usage spike at least 15-20 seconds into the call, regardless of how many calls were running.

I don't have any further time to dedicate to triaging this single issue. At least there are two possible ways to reproduce it documented here and the issue is open for developers to look at. Thanks for the report.

By: Modulus (modulus) 2013-11-05 13:56:24.176-0600

We can reproduce this on a KVM-based virtual machine running asterisk with the following characteristics:

||Architecture| x86_64 (KVM)|
||Number of virtual CPUs| 6|
||Distribution| Debian Wheezy 7.2|
||Asterisk| 10.12.1|
||Kernel| 3.2.0-4-amd64 (debian package version), 3.2.51 (upstream release)|
||CPUs on host machine| 2 x 8-core AMD Opteron(tm) Processor 4284|

To reproduce this automatically we use another asterisk server to originate a call to the target asterisk machine and send DTMFs using the following command:

{code}CLI> originate Local/2********@originate extension s@senddtmf{code}

with the following dialplan context:
{code}[senddtmf]
exten => s,1,Wait(10)
exten => s,n,SendDTMF(824718273487126345871263486128374619872364897632984716238947,25,100)
exten => s,n,Goto(s,1){code}

With timeouts of 25ms up to 40ms between DTMF tones (40ms being the minimum according to RFC 4733), the problem is reproduced automatically every time, with one CPU going to 100% (more CPUs can go to 100% if more than one call is sending DTMF tones concurrently). Although it cannot be reproduced with a SendDTMF() timeout of 45ms or above, it is quite easy to reproduce manually by just pressing digits fast enough, and this is very important because it shows that the problem can be triggered by normal use. It should also be noted that it takes just one call to reproduce the problem and drive one CPU to 100%. rtpkeepalive is not set.

Running the [perf|https://perf.wiki.kernel.org] tool we get the following distribution of CPU cycles inside the asterisk process:
{code}Events: 12K cpu-clock, DSO: asterisk            
+  30.47%  asterisk  ast_dummy_channel_destructor
+  18.92%  asterisk  ast_channel_destructor      
+  12.79%  asterisk  ast_dsp_process            
+  11.60%  asterisk  ast_party_redirecting_set_init  
+   5.45%  asterisk  ast_set_hangupsource            
+   2.78%  asterisk  generator_force                  
+   2.62%  asterisk  manager_state_cb                
+   2.21%  asterisk  ast_get_enum                    
+   1.88%  asterisk  0x4024f                          
+   1.68%  asterisk  ast_party_redirecting_copy      
+   1.44%  asterisk  action_originate                
+   1.32%  asterisk  ast_sendtext                    
+   1.12%  asterisk  action_aocmessage                
+   1.02%  asterisk  ast_senddigit_begin              
+   0.99%  asterisk  msg_send_exec                    
+   0.94%  asterisk  ast_settimeout                  
+   0.59%  asterisk  ast_read_generator_actions      
+   0.31%  asterisk  ast_register_application2        
+   0.22%  asterisk  __ast_manager_event_multichan    
+   0.21%  asterisk  pthread_mutex_lock@plt          
+   0.20%  asterisk  __init_manager                  
+   0.19%  asterisk  ast_channel_clear_softhangup    
+   0.19%  asterisk  action_coreshowchannels          
+   0.14%  asterisk  generic_thread_loop              
+   0.12%  asterisk  parse_naptr                      
+   0.07%  asterisk  private_enum_init                
+   0.06%  asterisk  read@plt                        
+   0.06%  asterisk  data_search_generate            
+   0.05%  asterisk  ast_register_switch              
+   0.04%  asterisk  pthread_self@plt                
+   0.04%  asterisk  ast_bridge_depart                
+   0.03%  asterisk  pthread_rwlock_unlock@plt        
+   0.02%  asterisk  fcntl@plt                        
+   0.02%  asterisk  ebl_callback                    
+   0.02%  asterisk  ast_stun_request                
+   0.02%  asterisk  write@plt                        
+   0.02%  asterisk  devstate_event                  
+   0.02%  asterisk  ast_dsp_init                    
+   0.02%  asterisk  msg_tech_hash                    
+   0.02%  asterisk  pbx_builtin_execiftime          
+   0.02%  asterisk  ast_pbx_outgoing_app            
+   0.02%  asterisk  ast_get_srv                      
+   0.01%  asterisk  poll@plt                        
+   0.01%  asterisk  gettimeofday@plt                
+   0.01%  asterisk  ast_parse_arg                    
+   0.01%  asterisk  ast_dsp_silence_noise_with_energy
+   0.01%  asterisk  __ast_dsp_call_progress          
+   0.01%  asterisk  ast_dsp_set_features            
+   0.01%  asterisk  ast_destroy_timing              
+   0.01%  asterisk  ast_say_date_with_format_it      
+   0.01%  asterisk  ssl_lock                        
+   0.01%  asterisk  tmcomp                          
+   0.01%  asterisk  _history_expand_command{code}


By: Modulus (modulus) 2013-12-02 12:11:24.352-0600

This bug also exists on *Asterisk 11*. Specifically, we have reproduced it in the same way as above on Asterisk *11.5.1*, provided by the Debian package from the wheezy-backports repository.

Has any developer had a look at this issue yet?

By: Modulus (modulus) 2013-12-10 09:10:24.313-0600

Could this be a regression of issue ASTERISK-16804?