I wrote a server and a client program in C and ran them on my BeagleBone board under different Linux versions, which I compiled myself. However, performance on the old kernel (2.6) is better than on the new kernel (4.9).
So I need to do some profiling to find out where the extra time goes (per-function cost, system calls, etc.).
// server snippet
while (1) {
    if (read(new_socket, buf, sizeof(int)) <= 0)
        break;
    // timecost start
    if (!init) {
        gettimeofday(&begin, NULL);
        gettimeofday(&start, NULL);
        init = 1;
    }
    c++;
    if (c / 1000000 != print) {
        gettimeofday(&end, NULL);
        printf("%d (%d) tot: %ld, last: %ld\n", print, count,
               get_usec(begin, end), get_usec(start, end));
        print = c / 1000000;
        gettimeofday(&start, NULL);
    }
    // timecost end
    count++;
    send(new_socket, buf, sizeof(int), 0);
}
// client snippet
while (1) {
    send(client_fd, buf, sizeof(int), 0);
    if (read(client_fd, buf, sizeof(int)) <= 0)
        break;
    // timecost start
    if (!init) {
        gettimeofday(&begin, NULL);
        gettimeofday(&start, NULL);
        init = 1;
    }
    c++;
    if (c / 1000000 != print) {
        gettimeofday(&end, NULL);
        printf("%d (%d) tot: %ld, last: %ld\n", print, count,
               get_usec(begin, end), get_usec(start, end));
        print = c / 1000000;
        gettimeofday(&start, NULL);
    }
    // timecost end
    count++;
}
The results I got on the old and new kernels:
2.x kernel:
0 (1999999) tot: 4609998, last: 4609984
1 (3999999) tot: 9150170, last: 4540120
4.x kernel:
0 (1999999) tot: 9451743, last: 9451729
1 (3999999) tot: 18892372, last: 9440623
Each print covers 1,000,000 round trips, so this works out to roughly 4.5-4.6 µs per round trip on 2.6 versus about 9.4 µs on 4.9, i.e. a bit over 2x slower.
Are there any tools I can use to debug/profile this and find the system-call usage and kernel-side time cost?
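For context, these are the kinds of commands I have in mind (perf and strace both run on ARM; `<pid>` is a placeholder for the server's pid):

```shell
# Count system calls and the time spent in each (attach, then Ctrl-C)
strace -c -f -p <pid>

# Kernel vs. user time and context-switch counts for a whole run
perf stat -e task-clock,context-switches,cpu-migrations ./server

# Sample with call graphs so the callers of hot kernel symbols are visible
perf record -g -p <pid>
perf report
```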
I did a perf record/report on the server process; every function shows a higher CPU share on the old kernel.
old:
39.54% server [kernel.kallsyms] [k] finish_task_switch
5.02% server [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
2.59% server [kernel.kallsyms] [k] nf_hook_slow
new:
24.74% server [kernel.kallsyms] [k] finish_task_switch
2.67% server [kernel.kallsyms] [k] nf_hook_slow