golang spanner test library crashes after a while - google-cloud-spanner

The Spanner Go library crashes after a few minutes, perhaps after this query (although the same query has succeeded earlier).
Version: cloud.google.com/go/spanner v1.11.0
2021/02/01 00:45:32.564971 spannertest.inmem: Querying: SELECT * FROM tenant_config WHERE commit_time > "2021-02-01T00:44:32Z"
Crash info:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0xae3bcb]
goroutine 214 [running]:
cloud.google.com/go/spanner/spannertest.(*server).ExecuteSql(0xc00009b4a0, 0xf74c00, 0xc0001e8270, 0xc0003e8c60, 0xc00009b4a0, 0xc0001e8270, 0xc0008a4ba0)
/Users/mpathak/Development/gopkgs/pkg/mod/cloud.google.com/go/spanner#v1.11.0/spannertest/inmem.go:491 +0x3b
google.golang.org/genproto/googleapis/spanner/v1._Spanner_ExecuteSql_Handler(0xd0f8a0, 0xc00009b4a0, 0xf74c00, 0xc0001e8270, 0xc00088b020, 0x0, 0xf74c00, 0xc0001e8270, 0xc0004f4060, 0x14)
/Users/mpathak/Development/gopkgs/pkg/mod/google.golang.org/genproto#v0.0.0-20201019141844-1ed22bb0c154/googleapis/spanner/v1/spanner.pb.go:3581 +0x217
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0003a1500, 0xf7e800, 0xc00018a900, 0xc00089a000, 0xc0001ac2a0, 0x152c7d8, 0x0, 0x0, 0x0)
/Users/mpathak/Development/gopkgs/pkg/mod/google.golang.org/grpc#v1.32.0/server.go:1194 +0x50a
google.golang.org/grpc.(*Server).handleStream(0xc0003a1500, 0xf7e800, 0xc00018a900, 0xc00089a000, 0x0)
/Users/mpathak/Development/gopkgs/pkg/mod/google.golang.org/grpc#v1.32.0/server.go:1517 +0xcfd
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000606140, 0xc0003a1500, 0xf7e800, 0xc00018a900, 0xc00089a000)
/Users/mpathak/Development/gopkgs/pkg/mod/google.golang.org/grpc#v1.32.0/server.go:859 +0xa1
created by google.golang.org/grpc.(*Server).serveStreams.func1
/Users/mpathak/Development/gopkgs/pkg/mod/google.golang.org/grpc#v1.32.0/server.go:857 +0x204

This seems to be caused by a bug in the ExecuteSql method implementation of spannertest. The session pool of the Spanner client will execute a ping statement every 50 minutes to keep sessions alive on the backend. These SELECT 1 statements are executed without a transaction, which means that the backend should default to a single-use read-only transaction. The inmem server of spannertest assumes that the client will always specify a TransactionSelector: https://github.com/googleapis/google-cloud-go/blob/c7ecf0f3f454606b124e52d20af2545b2c68646f/spanner/spannertest/inmem.go#L491
I've opened an issue for it here: https://github.com/googleapis/google-cloud-go/issues/3639
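For illustration, here is a minimal, hypothetical sketch of the kind of nil guard the fix needs (this is not the actual spannertest code): when a request such as the session pool's SELECT 1 ping arrives without any TransactionSelector, the server should fall back to a single-use read-only transaction instead of dereferencing the nil selector.

package main

import (
	"fmt"

	spannerpb "google.golang.org/genproto/googleapis/spanner/v1"
)

// transactionOrDefault is a hypothetical illustration, not the spannertest
// source: it returns the request's TransactionSelector, or a single-use
// read-only selector when the client (e.g. the session pool's keep-alive
// ping) did not specify one, mirroring what the real backend defaults to.
func transactionOrDefault(req *spannerpb.ExecuteSqlRequest) *spannerpb.TransactionSelector {
	if sel := req.GetTransaction(); sel != nil {
		return sel
	}
	return &spannerpb.TransactionSelector{
		Selector: &spannerpb.TransactionSelector_SingleUse{
			SingleUse: &spannerpb.TransactionOptions{
				Mode: &spannerpb.TransactionOptions_ReadOnly_{
					ReadOnly: &spannerpb.TransactionOptions_ReadOnly{},
				},
			},
		},
	}
}

func main() {
	// A keep-alive ping arrives with no transaction selector at all.
	req := &spannerpb.ExecuteSqlRequest{Sql: "SELECT 1"}
	fmt.Println(transactionOrDefault(req))
}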

Related

XMonad not giving space for Tint2 bar

I am currently setting up my new config for XMonad, but I cannot get the docks to work.
https://hastebin.com/eqoliyodat.swift
The tint2 window properties look like this:
WM_STATE(WM_STATE):
window state: Normal
icon window: 0x0
WM_NORMAL_HINTS(WM_SIZE_HINTS):
program specified location: 1559250944, -874843230
program specified minimum size: 127 by 42
program specified maximum size: 127 by 42
WM_CLASS(STRING) = "tint2", "Tint2"
XdndAware(ATOM) = ATOM
_MOTIF_WM_HINTS(_MOTIF_WM_HINTS) = 0x2, 0x0, 0x0, 0x0, 0x0
WM_HINTS(WM_HINTS):
Client accepts input or input focus: False
Initial state is Don't Care State.
window id # to use for icon: 0xe00003
window id # of group leader: 0xe00003
_NET_WM_STATE(ATOM) = _NET_WM_STATE_SKIP_PAGER, _NET_WM_STATE_SKIP_TASKBAR, _NET_WM_STATE_STICKY, _NET_WM_STATE_ABOVE
_NET_WM_DESKTOP(CARDINAL) = 4294967295
_NET_WM_WINDOW_TYPE(ATOM) = _NET_WM_WINDOW_TYPE_DOCK
_NET_WM_PID(CARDINAL) = 11648
_NET_WM_ICON_NAME(UTF8_STRING) = "tint2"
_NET_WM_NAME(UTF8_STRING) = "tint2"
WM_ICON_NAME(STRING) = "tint2"
WM_NAME(STRING) = "tint2"
I have no idea where this could be coming from. Can somebody help me?
I was looking through the current threads, but didn't seem to find any other solution to my problem. I can't use the normal gaps, because I have multiple monitors but a bar only on one screen.
Best regards.

Azure CLI VHD upload failing at 0.5 %

So, I have been trying for hours now to upload a VHD to Azure.
I downloaded the VHD by exporting it from one Azure tenant on another domain; now I'm trying to upload it to another Azure account to attach it to a VM.
Steps (based on this article):
Exported VHD on tenant 1 (T1)
Logged into Azure CLI on PC with tenant 2 (T2)
Created a disk through Azure CLI T2 with upload bytes parameter set (30GB - 32213303808 bytes)
Granted access to disk on T2
Started upload with AzCopy.exe copy "D:\Downloads\disk.vhd" "https://[sasurl]" --blob-type PageBlob
As soon as the upload starts, it gets stuck at 0.5 %; after about 55 minutes it just spits out that the upload has failed.
Log file:
2021/02/12 20:08:27 0.5 %, 0 Done, 0 Failed, 1 Pending, 0 Skipped, 1 Total, 2-sec Throughput (Mb/s): 14.1551
2021/02/12 20:08:28 INFO: [P#0-T#0] Page blob throughput tuner: Target Mbps 4048
2021/02/12 20:08:29 INFO: [P#0-T#0] Page blob throughput tuner: Target Mbps 4054
2021/02/12 20:08:29 PERF: primary performance constraint is Unknown. States: X: 0, O: 0, M: 0, L: 0, R: 1, D: 0, W: 480, F: 0, B: 96, E: 0, T: 577, GRs: 96
2021/02/12 20:08:29 0.5 %, 0 Done, 0 Failed, 1 Pending, 0 Skipped, 1 Total, 2-sec Throughput (Mb/s): 11.1397
2021/02/12 20:08:30 INFO: [P#0-T#0] Page blob throughput tuner: Target Mbps 4060
2021/02/12 20:08:31 INFO: [P#0-T#0] Page blob throughput tuner: Target Mbps 4066
2021/02/12 20:08:31 ==> REQUEST/RESPONSE (Try=1/11.1957784s[SLOW >3s], OpTime=11.1957784s) -- REQUEST ERROR
PUT [redacted SAS url]
Content-Length: [4194304]
User-Agent: [AzCopy/10.8.0 Azure-Storage/0.10 (go1.13; Windows_NT)]
X-Ms-Client-Request-Id: [redacted]
X-Ms-Page-Write: [update]
X-Ms-Range: [bytes=398458880-402653183]
X-Ms-Version: [2019-12-12]
--------------------------------------------------------------------------------
ERROR:
-> github.com/Azure/azure-pipeline-go/pipeline.NewError, /home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/error.go:157
HTTP request failed
Put [redacted SAS url]: net/http: TLS handshake timeout
goroutine 214 [running]:
github.com/Azure/azure-storage-azcopy/ste.stack(0xc092e4ea20, 0xc04e193300, 0x0)
/home/vsts/work/1/s/ste/xferLogPolicy.go:232 +0xa4
github.com/Azure/azure-storage-azcopy/ste.NewRequestLogPolicyFactory.func1.1(0xdb74a0, 0xc000653560, 0xc001189300, 0x10, 0x3, 0x0, 0xc00168fa20)
/home/vsts/work/1/s/ste/xferLogPolicy.go:146 +0x7ac
github.com/Azure/azure-pipeline-go/pipeline.PolicyFunc.Do(0xc00175c640, 0xdb74a0, 0xc000653560, 0xc001189300, 0xc00168faf0, 0x2030004, 0xa, 0x2030005)
/home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/core.go:43 +0x4b
github.com/Azure/azure-storage-azcopy/ste.NewVersionPolicyFactory.func1.1(0xdb74a0, 0xc000653560, 0xc001189300, 0xc0010cac40, 0xc0010cabf0, 0x40c7c0, 0xc00175c6e0)
/home/vsts/work/1/s/ste/mgr-JobPartMgr.go:78 +0x1b8
github.com/Azure/azure-pipeline-go/pipeline.PolicyFunc.Do(0xc00126c2e0, 0xdb74a0, 0xc000653560, 0xc001189300, 0x10, 0x10, 0xb939e0, 0xa)
/home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/core.go:43 +0x4b
github.com/Azure/azure-storage-blob-go/azblob.responderPolicy.Do(0xda72e0, 0xc00126c2e0, 0xc0011b0820, 0xdb74a0, 0xc000653560, 0xc001189300, 0x0, 0x12e2360, 0xc0010cac50, 0x4310d8)
/home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go#v0.10.1-0.20201022074806-8d8fc11be726/azblob/zz_generated_responder_policy.go:33 +0x5a
github.com/Azure/azure-storage-blob-go/azblob.anonymousCredentialPolicy.Do(...)
/home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go#v0.10.1-0.20201022074806-8d8fc11be726/azblob/zc_credential_anonymous.go:54
github.com/Azure/azure-storage-azcopy/ste.(*retryNotificationPolicy).Do(0xc001234560, 0xdb74a0, 0xc000653560, 0xc001189300, 0x0, 0x0, 0x0, 0xc0010cad70)
/home/vsts/work/1/s/ste/xferRetryNotificationPolicy.go:59 +0x62
github.com/Azure/azure-pipeline-go/pipeline.PolicyFunc.Do(0xc0012345c0, 0xdb74a0, 0xc000653560, 0xc001189300, 0xc000653560, 0xc001234dd0, 0xc000000001, 0x0)
/home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/core.go:43 +0x4b
github.com/Azure/azure-storage-azcopy/ste.NewBlobXferRetryPolicyFactory.func1.1(0xdb74e0, 0xc0001c7440, 0xc001189200, 0x10, 0xb411a0, 0x64492d747301, 0xc00168f340)
/home/vsts/work/1/s/ste/xferRetrypolicy.go:384 +0x70f
github.com/Azure/azure-pipeline-go/pipeline.PolicyFunc.Do(0xc00175c690, 0xdb74e0, 0xc0001c7440, 0xc001189200, 0xc00168f440, 0x30, 0x28, 0x8)
/home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/core.go:43 +0x4b
github.com/Azure/azure-storage-blob-go/azblob.NewUniqueRequestIDPolicyFactory.func1.1(0xdb74e0, 0xc0001c7440, 0xc001189200, 0x10, 0xb411a0, 0x40d001, 0xc00168f340)
/home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go#v0.10.1-0.20201022074806-8d8fc11be726/azblob/zc_policy_unique_request_id.go:19 +0xb5
github.com/Azure/azure-pipeline-go/pipeline.PolicyFunc.Do(0xc00126c320, 0xdb74e0, 0xc0001c7440, 0xc001189200, 0xc00168f428, 0x35, 0xc0007d03c0, 0xc0010cb138)
/home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/core.go:43 +0x4b
github.com/Azure/azure-storage-blob-go/azblob.NewTelemetryPolicyFactory.func1.1(0xdb74e0, 0xc0001c7440, 0xc001189200, 0x1, 0x0, 0x1, 0xc0017100a0)
/home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go#v0.10.1-0.20201022074806-8d8fc11be726/azblob/zc_policy_telemetry.go:34 +0x164
github.com/Azure/azure-pipeline-go/pipeline.PolicyFunc.Do(0xc0001c74d0, 0xdb74e0, 0xc0001c7440, 0xc001189200, 0xc0001c74d0, 0xc001189200, 0xc0010cb208, 0x40d06f)
/home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/core.go:43 +0x4b
github.com/Azure/azure-pipeline-go/pipeline.(*pipeline).Do(0xc0007ce040, 0xdb74e0, 0xc0001c7440, 0xda73e0, 0xc0011b0820, 0xc001189200, 0x30, 0xc000000638, 0x12, 0x0)
/home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/core.go:129 +0x88
github.com/Azure/azure-storage-blob-go/azblob.pageBlobClient.UploadPages(0xc000000600, 0x5, 0x0, 0x0, 0x0, 0xc000000608, 0x30, 0xc000000638, 0x12, 0x0, ...)
/home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go#v0.10.1-0.20201022074806-8d8fc11be726/azblob/zz_generated_page_blob.go:805 +0x5b0
github.com/Azure/azure-storage-blob-go/azblob.PageBlobURL.UploadPages(0xc000000600, 0x5, 0x0, 0x0, 0x0, 0xc000000608, 0x30, 0xc000000638, 0x12, 0x0, ...)
/home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go#v0.10.1-0.20201022074806-8d8fc11be726/azblob/url_page_blob.go:87 +0x336
github.com/Azure/azure-storage-azcopy/ste.(*pageBlobUploader).GenerateUploadFunc.func1()
/home/vsts/work/1/s/ste/sender-pageBlobFromLocal.go:96 +0x8f3
github.com/Azure/azure-storage-azcopy/ste.createChunkFunc.func1(0x3c)
/home/vsts/work/1/s/ste/sender.go:179 +0x1ae
github.com/Azure/azure-storage-azcopy/ste.(*jobsAdmin).chunkProcessor(0xc000054000, 0x3c)
/home/vsts/work/1/s/ste/JobsAdmin.go:433 +0xe6
created by github.com/Azure/azure-storage-azcopy/ste.(*jobsAdmin).poolSizer
/home/vsts/work/1/s/ste/JobsAdmin.go:362 +0x682
2021/02/12 20:08:31 ==> REQUEST/RESPONSE (Try=1/11.1957784s[SLOW >3s], OpTime=11.1957784s) -- REQUEST ERROR
PUT [redacted SAS url]
Content-Length: [4194304]
User-Agent: [AzCopy/10.8.0 Azure-Storage/0.10 (go1.13; Windows_NT)]
X-Ms-Client-Request-Id: [redacted]
X-Ms-Page-Write: [update]
X-Ms-Range: [bytes=394264576-398458879]
X-Ms-Version: [2019-12-12]
--------------------------------------------------------------------------------
ERROR:
-> github.com/Azure/azure-pipeline-go/pipeline.NewError, /home/vsts/go/pkg/mod/github.com/!azure/azure-pipeline-go#v0.2.3/pipeline/error.go:157
HTTP request failed
And it just goes on like this for thousands of lines; at the end of the log there's this:
2021/02/12 19:56:56 INFO: [P#0-T#0] Page blob throughput tuner: Target Mbps 98705
2021/02/12 19:56:57 ==> REQUEST/RESPONSE (Try=14/2m0.3843703s[SLOW >3s], OpTime=38m15.4275233s) -- RESPONSE STATUS CODE ERROR
PUT https://md-impexp-vpwbn0cr4mj2.z18.blob.storage.azure.net/3ntgq1cq2rsj/abcd?comp=page&si=a01312d5-4da8-41d1-acc6-7466f702a447&sig=-REDACTED-&sr=b&sv=2018-03-28&timeout=901
Content-Length: [4194304]
User-Agent: [AzCopy/10.8.0 Azure-Storage/0.10 (go1.13; Windows_NT)]
X-Ms-Client-Request-Id: [36e705e8-9f5d-45a1-4574-7e994292da82]
X-Ms-Page-Write: [update]
X-Ms-Range: [bytes=398458880-402653183]
X-Ms-Version: [2019-12-12]
--------------------------------------------------------------------------------
RESPONSE Status: 500 Operation could not be completed within the specified time.
Content-Length: [246]
Content-Type: [application/xml]
Date: [Fri, 12 Feb 2021 19:56:56 GMT]
Server: [Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0]
X-Ms-Client-Request-Id: [36e705e8-9f5d-45a1-4574-7e994292da82]
X-Ms-Error-Code: [OperationTimedOut]
X-Ms-Request-Id: [f91ed41a-c01e-0008-4f78-011552000000]
X-Ms-Version: [2019-12-12]
Response Details: <?xml version="1.0" encoding="utf-8"?> <Error><Code>OperationTimedOut</Code><Message>Operation could not be completed within the specified time. </Message>
So what am I doing wrong? I have tried both the CLI and PowerShell.
I stumbled upon a good solution myself:
Since I was uploading a VHD of a disk that was already on Azure, instead of using the downloaded VHD I used the original VHD export URL as the source for the new disk.
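For reference, the same idea can also be done entirely server-side with the tool already used above: AzCopy can copy straight from the export SAS URL of the source disk into the upload SAS URL of the new disk, so the bytes never pass through the local machine. A sketch, assuming both SAS URLs are still valid (placeholders, not the author's exact command):
AzCopy.exe copy "https://[source-export-sas-url]" "https://[target-upload-sas-url]" --blob-type PageBlob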

Driver is unable to get I/O DMA Adapter

My driver for a PCI Bus-Master Device without Scatter/Gather capability calls IoGetDmaAdapter(), but fails with 0xFFFFFFFFC0000005 Access Violation. This causes a BSOD.
Here's how I set it up:
RtlZeroMemory(&deviceDescription, sizeof(DEVICE_DESCRIPTION));
deviceDescription.Master = TRUE; // this is a bus-master device without scatter/gather ability
deviceDescription.Dma32BitAddresses = TRUE; // this device is unable to perform 64-bit addressing
deviceDescription.InterfaceType = InterfaceTypeUndefined;
KdBreakPoint();
deviceDescription.Version = DEVICE_DESCRIPTION_VERSION2;
IoGetDmaAdapter(deviceObject, &deviceDescription, &fakeRegs);
Here's my Windows kernel debugging session:
MyDriver!AllocateHardWareResource+0x313:
fffff803`319626a3 488b8424e8000000 mov rax,qword ptr [rsp+0E8h]
MyDriver!AllocateHardWareResource+0x324:
fffff803`319626b4 488d442478 lea rax,[rsp+78h]
MyDriver!AllocateHardWareResource+0x34d:
fffff803`319626dd 8b442450 mov eax,dword ptr [rsp+50h]
MyDriver!AllocateHardWareResource+0x358:
fffff803`319626e8 c684248900000001 mov byte ptr [rsp+89h],1
MyDriver!AllocateHardWareResource+0x360:
fffff803`319626f0 c784248000000002000000 mov dword ptr [rsp+80h],2
MyDriver!AllocateHardWareResource+0x36b:
fffff803`319626fb 4c8d44244c lea r8,[rsp+4Ch]
KDTARGET: Refreshing KD connection
KDTARGET: Refreshing KD connection
*** Fatal System Error: 0x0000007e
(0xFFFFFFFFC0000005,0x0000000000000000,0xFFFF9400DE25D4B8,0xFFFF9400DE25CCF0)
WARNING: This break is not a step/trace completion.
The last command has been cleared to prevent
accidental continuation of this unrelated event.
Check the event, location and thread before resuming.
Break instruction exception - code 80000003 (first chance)
A fatal system error has occurred.
Before the crash at Guard_Dispatch_iCall_NOP, I see the following call-stack:
HalpGetCacheCoherency + 6D
HalGetAdapterV2 + A8
IoGetDmaAdapter + C0
IoGetDmaAdapter + C0
IoGetDmaAdapter + C0
My Call-Site
I checked that the Physical Device Object has the same address as the one originally provided to my AddDevice handler.
How should I ask politely to avoid a "Sorry Dave, I can't do that" from the Windows kernel I/O manager?
When my driver calls into IoGetDmaAdapter(), the driver receives two interface queries via IRP_MN_QUERY_INTERFACE: GUID_BUS_INTERFACE_STANDARD and GUID_DMA_CACHE_COHERENCY_INTERFACE.
GUID_DMA_CACHE_COHERENCY_INTERFACE is new in Windows 10 / Server 2016.
The GUID_DMA_CACHE_COHERENCY_INTERFACE query should normally be passed down to the next driver in the stack. My mistake was completing that query with a success status; I should have left it alone.

Panic in Golang http.Client with high concurrent executions

I am creating a system which is an HTTP server in Go that performs several requests to another API for each request that comes to it.
e.g.
curl localhost:8080/users?ids=1,2,3,4
will perform several concurrent gets to:
api.com/user/1
api.com/user/2
api.com/user/3
api.com/user/4
I am having a problem: the http.Client panics under heavy concurrent requests (if I hit localhost:8080/users?ids=1,2,3,4.....40 with ab at 4 concurrent requests, or hit refresh in my browser).
The problem appears to be with this statement (line 159):
resp, _ := client.Do(req)
My code is here (Not so large... 180 lines):
http://play.golang.org/p/olibNz2n1Z
The panic error is this one:
goroutine 5 [select]:
net/http.(*persistConn).roundTrip(0xc210058f80, 0xc21000a720, 0xc210058f80, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transport.go:879 +0x6d6
net/http.(*Transport).RoundTrip(0xc210058280, 0xc21005b1a0, 0x1, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transport.go:187 +0x391
net/http.send(0xc21005b1a0, 0x590290, 0xc210058280, 0x0, 0x0, ...)
/usr/local/go/src/pkg/net/http/client.go:168 +0x37f
net/http.(*Client).send(0xc21001e960, 0xc21005b1a0, 0x28, 0xc21001ec30, 0xc21005f570)
/usr/local/go/src/pkg/net/http/client.go:100 +0xd9
net/http.(*Client).doFollowingRedirects(0xc21001e960, 0xc21005b1a0, 0x2ab298, 0x0, 0x0, ...)
/usr/local/go/src/pkg/net/http/client.go:294 +0x671
net/http.(*Client).Do(0xc21001e960, 0xc21005b1a0, 0xa, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/client.go:129 +0x8f
main.buscarRecurso(0xc21000a650, 0xb, 0xc2100526c0)
/Users/fscasserra/Documents/workspace/Luna/multiget-api/multiget.go:159 +0x131
created by main.obtenerRecursos
/Users/fscasserra/Documents/workspace/Luna/multiget-api/multiget.go:106 +0x197
Can anyone help me?
Best regards,
Fer
I will put money on the panic coming from calling Close() on a nil resp.Body.
Always check your errors!
In general, if a function returns a value and an error, the response value may not be usable in the case of a non-nil error. Any exceptions to this should be well documented.
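A minimal sketch of the safe pattern (the function name here is hypothetical; in the questioner's code the call site is buscarRecurso): check the error returned by client.Do before touching resp, because resp and resp.Body may be nil when err is non-nil.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// fetchUser is a hypothetical sketch, not the questioner's buscarRecurso:
// the point is that the error from client.Do is checked before resp is used.
func fetchUser(client *http.Client, url string) ([]byte, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := client.Do(req)
	if err != nil {
		// Do NOT touch resp here: it may be nil when err != nil.
		return nil, err
	}
	defer resp.Body.Close() // safe only after the error check above
	return ioutil.ReadAll(resp.Body)
}

func main() {
	body, err := fetchUser(http.DefaultClient, "http://api.com/user/1")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Printf("got %d bytes\n", len(body))
}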

Linux soft lockup in memory

My system is an embedded Linux system (running kernel version 2.6.18). A client process sends data to a MySQL server. The data is stored in a MySQL database on a RAID5 array assembled from four disks. The I/O pressure (wa%) is always above 20%, and the MySQL CPU utilization is very high.
After running for 5 or 6 hours, the system runs into a soft lockup state.
The stack traces are about releasing physical memory and writing cached data to the hard disk.
Any suggestions in this circumstance?
BUG: soft lockup detected on CPU#0!
[<c043dc1c>] softlockup_tick+0x8f/0xb1
[<c0428cb5>] update_process_times+0x26/0x5c
[<c0411256>] smp_apic_timer_interrupt+0x5d/0x67
[<c04044e7>] apic_timer_interrupt+0x1f/0x24
[<c06fe0b9>] _spin_lock+0x5/0xf
[<c047db2a>] __mark_inode_dirty+0x50/0x176
[<c0424eef>] current_fs_time+0x4d/0x5e
[<c0475ccd>] touch_atime+0x51/0x94
[<c0440926>] do_generic_mapping_read+0x425/0x563
[<c044134b>] __generic_file_aio_read+0xf3/0x267
[<c043fcd0>] file_read_actor+0x0/0xd4
[<c04414fb>] generic_file_aio_read+0x3c/0x4d
[<c045d72d>] do_sync_read+0xc1/0xfd
[<c0431656>] autoremove_wake_function+0x0/0x37
[<c045e0e8>] vfs_read+0xa4/0x167
[<c045d66c>] do_sync_read+0x0/0xfd
[<c045e688>] sys_pread64+0x5e/0x62
[<c0403a27>] syscall_call+0x7/0xb
=======================
BUG: soft lockup detected on CPU#2!
[<c043dc1c>] softlockup_tick+0x8f/0xb1
[<c0428cb5>] update_process_times+0x26/0x5c
[<c0411256>] smp_apic_timer_interrupt+0x5d/0x67
[<c04044e7>] apic_timer_interrupt+0x1f/0x24
[<c06fe0bb>] _spin_lock+0x7/0xf
[<c04aaf17>] journal_try_to_free_buffers+0xf4/0x17b
[<c0442c52>] find_get_pages+0x28/0x5d
[<c049c4b1>] ext3_releasepage+0x0/0x7d
[<c045f0bf>] try_to_release_page+0x2c/0x46
[<c0447894>] invalidate_mapping_pages+0xc9/0x167
[<c04813b0>] drop_pagecache+0x86/0xd2
[<c048144e>] drop_caches_sysctl_handler+0x52/0x64
[<c04813fc>] drop_caches_sysctl_handler+0x0/0x64
[<c042623d>] do_rw_proc+0xe8/0xf4
[<c0426268>] proc_writesys+0x1f/0x24
[<c045df81>] vfs_write+0xa6/0x169
[<c0426249>] proc_writesys+0x0/0x24
[<c045e601>] sys_write+0x41/0x6a
[<c0403a27>] syscall_call+0x7/0xb
=======================
BUG: soft lockup detected on CPU#1!
[<c043dc1c>] softlockup_tick+0x8f/0xb1
[<c0428cb5>] update_process_times+0x26/0x5c
[<c0411256>] smp_apic_timer_interrupt+0x5d/0x67
[<c04044e7>] apic_timer_interrupt+0x1f/0x24
[<c06f007b>] inet_diag_dump+0x804/0x821
[<c06fe0bb>] _spin_lock+0x7/0xf
[<c047db2a>] __mark_inode_dirty+0x50/0x176
[<c043168d>] wake_bit_function+0x0/0x3c
[<c04ae0f6>] __journal_remove_journal_head+0xee/0x1a5
[<c0445ae8>] __set_page_dirty_nobuffers+0x87/0xc6
[<c04a908e>] __journal_unfile_buffer+0x8/0x11
[<c04ab94d>] journal_commit_transaction+0x8e0/0x1103
[<c0431656>] autoremove_wake_function+0x0/0x37
[<c04af690>] kjournald+0xa9/0x1e5
[<c0431656>] autoremove_wake_function+0x0/0x37
[<c04af5e7>] kjournald+0x0/0x1e5
[<c04314da>] kthread+0xde/0xe2
[<c04313fc>] kthread+0x0/0xe2
[<c0404763>] kernel_thread_helper+0x7/0x14
=======================
BUG: soft lockup detected on CPU#3!
[<c043dc1c>] softlockup_tick+0x8f/0xb1
[<c0428cb5>] update_process_times+0x26/0x5c
[<c0411256>] smp_apic_timer_interrupt+0x5d/0x67
[<c04044e7>] apic_timer_interrupt+0x1f/0x24
[<c06fe0bb>] _spin_lock+0x7/0xf
[<c047db2a>] __mark_inode_dirty+0x50/0x176
[<c0424eef>] current_fs_time+0x4d/0x5e
[<c0475ccd>] touch_atime+0x51/0x94
[<c0440926>] do_generic_mapping_read+0x425/0x563
[<c044134b>] __generic_file_aio_read+0xf3/0x267
[<c043fcd0>] file_read_actor+0x0/0xd4
[<c04414fb>] generic_file_aio_read+0x3c/0x4d
[<c045d72d>] do_sync_read+0xc1/0xfd
[<c0431656>] autoremove_wake_function+0x0/0x37
[<c045e0e8>] vfs_read+0xa4/0x167
[<c045d66c>] do_sync_read+0x0/0xfd
[<c045e688>] sys_pread64+0x5e/0x62
[<c0403a27>] syscall_call+0x7/0xb
=======================
Seriously, try something newer. 2.6.18 is >7 years old.
Looks like CPU#1 and CPU#3 are spinning on a spinlock on an inode structure.

Resources