lolMiner 1.46


Author: Admin | 2025-04-28

-1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.611737 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.614339 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.616936 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.619524 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.622107 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.624682 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.627258 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.629833 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3886 18:46:54.632166 recvmsg(51, {msg_name(0)=NULL, msg_iov(1)=[{"{\"id\":4,\"jsonrpc\":\"2.0\",\"result\":true}\n", 1024}], msg_controllen=0, msg_flags=0}, 0) = 39
3886 18:46:54.632284 recvmsg(51, 0x7f2b6b472240, 0) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.632409 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:54.634988 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
[cut 360 identical rows in less than 1 second]
3883 18:46:55.493760 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:55.496347 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:55.498934 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3886 18:46:55.499474 recvmsg(51, {msg_name(0)=NULL, msg_iov(1)=[{"{\"id\":6,\"jsonrpc\":\"2.0\",\"method\":\"mining.notify\",\"params\":[\"cbaa1a4d2d3b1e86484e17319a97320d4f469d0aefbe9c8b463b6923adff7f18\",\"cbaa1a4d2d3b1e86484e17319a97320d4f469d0aefbe9c8b463b6923adff7f18\",\"ad15d04b13b18ecbb6bc1c05cefa1e952fe584f2c79fb5f3dbc48656704b0f95\",\"0000000112e0be826d694b2e62d01511f12a6061fbaec8bc02357593e70e52ba\",false]}\n", 1024}], msg_controllen=0, msg_flags=0}, 0) = 335
3886 18:46:55.499645 recvmsg(51, 0x7f2b6b472240, 0) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:55.501514 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:46:55.504098 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
[cut 3530 identical rows in less than 10 seconds]
3883 18:47:04.656530 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:47:04.659108 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3886 18:47:04.661301 recvmsg(51, {msg_name(0)=NULL, msg_iov(1)=[{"{\"id\":6,\"jsonrpc\":\"2.0\",\"method\":\"mining.notify\",\"params\":[\"06cf9c1a96c15709e160c4c8b3af8c43fc5d19da938a77f41b7d88d37a4b37c6\",\"06cf9c1a96c15709e160c4c8b3af8c43fc5d19da938a77f41b7d88d37a4b37c6\",\"ad15d04b13b18ecbb6bc1c05cefa1e952fe584f2c79fb5f3dbc48656704b0f95\",\"0000000112e0be826d694b2e62d01511f12a6061fbaec8bc02357593e70e52ba\",false]}\n", 1024}], msg_controllen=0, msg_flags=0}, 0) = 335
3886 18:47:04.661457 recvmsg(51, 0x7f2b6b472240, 0) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:47:04.661687 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:47:04.664266 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:47:04.666849 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)
3883 18:47:04.669435 accept(50, 0x7f2b6cc75ec0, 0x7f2b6cc75eac) = -1 EAGAIN (Resource temporarily unavailable)

Apparently Boost::asio is hammering the socket with polls in search of data, and I suspect this behavior may lead to cases like #936, where weak CPUs record high usage and eventually hit segfaults in the NIC driver. I think (but I might well be wrong) that async_read_until, since it is a composed operation made of multiple async_read_some calls, keeps polling the io_service (and through it the NIC) for small chunks of data, continuously erroring out with EAGAIN and never blocking. Keeping the listening socket on a separate thread that can block synchronously would probably mitigate this issue. Any help or hint appreciated.
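
For reference, the accept()/EAGAIN pattern in the trace is what polling a non-blocking listening socket in a loop looks like. Below is a minimal, hypothetical reproducer, not lolMiner's actual code; port 4444 and all names are placeholders I made up, and it produces the same strace signature as above:

```cpp
// Hypothetical reproducer (not lolMiner's code): a non-blocking listening
// socket polled in a loop yields exactly the accept() = -1 EAGAIN pattern
// seen in the strace above. Port 4444 is an arbitrary placeholder.
#include <arpa/inet.h>
#include <cerrno>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(4444);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    listen(fd, 16);

    // Put the listening socket in non-blocking mode, as a reactor would.
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

    for (;;) {
        // With no pending connection this returns -1 with errno == EAGAIN
        // immediately, so the loop spins and burns CPU.
        int client = accept(fd, nullptr, nullptr);
        if (client < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            continue;
        if (client >= 0)
            close(client);
    }
}
```

Note that the accept() timestamps in the trace are roughly 2.6 ms apart, so asio is presumably sleeping briefly between passes rather than spinning flat out; the sketch above omits that detail and shows only the polling pattern.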
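And here is a minimal sketch of the mitigation I have in mind: give the listening socket its own thread and use a blocking accept, so the thread is parked in the kernel until a connection actually arrives. This is an assumption-laden sketch, not lolMiner's API server; io_context is the modern name for the io_service mentioned above, and port 4444 is again a placeholder:

```cpp
// Sketch of the proposed mitigation (assumptions: placeholder port, no
// relation to lolMiner's real API code): a dedicated thread doing a
// *blocking* accept instead of a non-blocking accept poll loop.
#include <boost/asio.hpp>
#include <thread>

namespace asio = boost::asio;
using asio::ip::tcp;

int main() {
    asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 4444));

    std::thread listener([&] {
        for (;;) {
            tcp::socket client(io);
            boost::system::error_code ec;
            // Synchronous accept: the thread blocks here, so no
            // accept()/EAGAIN churn shows up in strace while it waits.
            acceptor.accept(client, ec);
            if (ec)
                break;
            // ... hand `client` off to whatever serves the request ...
        }
    });

    listener.join();
}
```

The trade-off is one extra thread for the listener, but in exchange the CPU stays idle between connections, which is exactly the behavior the trace above is missing.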
