Here's a work problem I've been attacking diligently for about a week now without success.
I have legacy Java code that uses the Apache Commons Net FTPClient library to send a file to a mainframe. The code works perfectly for files under 86 MB. However, when it encounters a file over 86 MB, it fails with a CopyStreamException (with no other useful information). I've added timeouts and a CopyStreamListener to no avail. The listener output shows the FTP upload sending buffers of data to the mainframe successfully until it reaches 86 MB (sample log output is included below).
I originally thought the problem was network/firewall related (which is why you see so many timeout manipulations in the code), but I now understand it has to do with the space allocation commands I am sending to the mainframe. I have gotten assistance from a mainframe expert here, and he has made NUMEROUS suggestions for different combinations of BLKSIZE, SPACE, LRECL, etc., but none of them have worked. I did get the transfer to run to completion one time; however, the mainframe guy then informed me that the block size and format parameters of the file created on the mainframe were incorrect, so I had to scrap that (I explain what worked in that case further down).
First, here's the Java code:
public static boolean copyFileThruFTP(
        String srcFileName, String remoteFileName, Properties exportProps) {
    boolean success = true;
    String serverUserName = exportProps.getProperty(WebUtil.FTP_SERVER_USER);
    String serverPwd = exportProps.getProperty(WebUtil.FTP_SERVER_PWD);
    String serverIp = exportProps.getProperty(WebUtil.FTP_SERVER_IP);
    File f = new File(srcFileName);
    FTPClient ftpClient = null;
    try {
        String errorMessage = null;
        FileInputStream input = new FileInputStream(f);
        ftpClient = new FTPClient();
        ftpClient.setDataTimeout(6000000); // 100 minutes
        ftpClient.setConnectTimeout(6000000); // 100 minutes
        ftpClient.connect(serverIp);
        int reply = ftpClient.getReplyCode();
        if (!FTPReply.isPositiveCompletion(reply)) {
            errorMessage = "FTP server refused connection: " + ftpClient.getReplyString();
        } else {
            if (!ftpClient.login(serverUserName, serverPwd)) {
                errorMessage = "Failed to copy file thru FTP: login failed.";
            } else {
                ftpClient.setBufferSize(1024000);
                ftpClient.setControlKeepAliveTimeout(30); // sends a keepalive every thirty seconds
                if (ftpClient.deleteFile(remoteFileName)) {
                    log.warn("Deleted existing file from remote server: " + remoteFileName);
                }
                ftpClient.setFileTransferMode(FTP.ASCII_FILE_TYPE);
                ftpClient.setFileType(FTP.ASCII_FILE_TYPE);
                ftpClient.sendSiteCommand("RECFM=FB");
                ftpClient.sendSiteCommand("LRECL=2000");
                ftpClient.sendSiteCommand("BLKSIZE=27000");
                //ftpClient.sendSiteCommand("DCB=(RECFM=FB,LRECL=2000,BLKSIZE=26000)");
                ftpClient.sendSiteCommand("SPACE=(TRK,(500,500),RLSE)");
                OutputStream ftpOut = ftpClient.storeFileStream(remoteFileName);
                if (ftpOut == null) {
                    errorMessage = "FTP server could not open file for write: " + ftpClient.getReplyString();
                } else {
                    OutputStream output = new BufferedOutputStream(ftpOut);
                    log.warn("copyFileThruFTP calling copyStream() for file: " + f.getAbsolutePath());
                    Util.copyStream(input, output, ftpClient.getBufferSize(), f.length(),
                        new CopyStreamAdapter() {
                            public void bytesTransferred(
                                    long totalBytesTransferred, int bytesTransferred, long streamSize) {
                                log.warn(bytesTransferred + " bytes written; total: " + totalBytesTransferred + " of " + streamSize);
                            }
                        });
                    input.close();
                    output.close();
                    if (!ftpClient.completePendingCommand()) {
                        errorMessage = "Failed to copy file thru FTP: completePendingCommand failed.";
                    }
                }
            }
            ftpClient.logout();
            ftpClient.disconnect();
            ftpClient = null;
        }
        if (!StringUtils.isEmpty(errorMessage)) {
            log.error(errorMessage);
            System.out.print(errorMessage);
            success = false;
        }
    } catch (CopyStreamException cse) {
        cse.printStackTrace();
        log.error("Failed to copy file thru FTP (CopyStreamException).", cse);
        success = false;
    } catch (IOException e) {
        e.printStackTrace();
        log.error("Failed to copy file thru FTP (IOException).", e);
        success = false;
    } finally {
        try {
            if (ftpClient != null) {
                ftpClient.logout();
                ftpClient.disconnect();
            }
        } catch (IOException ignore) {}
    }
    return success;
}
Now, as I've said, this code works exceptionally well for all files under 86 MB, so while it may be useful from a knowledge point of view, I don't really need tips on Java coding style, etc. Also note that in posting this method I removed comments and extraneous code, so there may be a syntax error or two, although I didn't see any when I copied it back into Eclipse. What I'm trying to resolve is why this code works for small files but not for big files!
Here is a sample of the log output for the large file (modified slightly for aesthetics):
2012-02-29 11:13 WARN - copyFileThruFTP calling copyStream() for file: C:\data\mergedcdi\T0090200.txt
2012-02-29 11:13 WARN - 1024000 bytes; total: 1024000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 2048000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 3072000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 4096000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 5120000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 6144000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 7168000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 8192000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 9216000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 10240000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 11264000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 12288000 of 96580484
2012-02-29 11:13 WARN - 1024000 bytes; total: 13312000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 14336000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 15360000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 16384000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 17408000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 18432000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 19456000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 20480000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 21504000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 22528000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 23552000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 24576000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 25600000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 26624000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 27648000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 28672000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 29696000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 30720000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 31744000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 32768000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 33792000 of 96580484
2012-02-29 11:14 WARN - 1024000 bytes; total: 34816000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 35840000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 36864000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 37888000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 38912000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 39936000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 40960000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 41984000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 43008000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 44032000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 45056000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 46080000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 47104000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 48128000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 49152000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 50176000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 51200000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 52224000 of 96580484
2012-02-29 11:15 WARN - 1024000 bytes; total: 53248000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 54272000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 55296000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 56320000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 57344000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 58368000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 59392000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 60416000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 61440000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 62464000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 63488000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 64512000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 65536000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 66560000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 67584000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 68608000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 69632000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 70656000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 71680000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 72704000 of 96580484
2012-02-29 11:16 WARN - 1024000 bytes; total: 73728000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 74752000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 75776000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 76800000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 77824000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 78848000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 79872000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 80896000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 81920000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 82944000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 83968000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 84992000 of 96580484
2012-02-29 11:17 WARN - 1024000 bytes; total: 86016000 of 96580484
2012-02-29 11:17 ERROR- Failed to copy file thru FTP (CopyStreamException).
org.apache.commons.net.io.CopyStreamException: IOException caught while copying.
at org.apache.commons.net.io.Util.copyStream(Util.java:130)
at org.apache.commons.net.io.Util.copyStream(Util.java:174)
at com.pa.rollupedit.util.WebUtil.copyFileThruFTP(WebUtil.java:1120)
at com.pa.rollupedit.util.CdiBuilder.process(CdiBuilder.java:361)
at com.pa.rollupedit.controller.ExportCDI.doGet(ExportCDI.java:55)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at com.pa.rollupedit.controller.LoginFilter.doFilter(LoginFilter.java:90)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at com.pa.rollupedit.util.RequestTimeFilter.doFilter(RequestTimeFilter.java:18)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
OK, so now to what happens on the mainframe. Here is the information the mainframe guy provided regarding the file generated by the incomplete transmission:
Data Set Name . . . . : BSFP.ICEDCDI.SPC10.T0090200
General Data Current Allocation
Management class . . : MCDFLT Allocated tracks . : 1,650
Storage class . . . : SCSTD Allocated extents . : 32
Volume serial . . . : SMS217 +
Device type . . . . : 3390
Data class . . . . . : DCDFLT
Organization . . . : PS Current Utilization
Record format . . . : FB Used tracks . . . . : 1,650
Record length . . . : 2000 Used extents . . . : 32
Block size . . . . : 26000
1st extent tracks . : 100
Secondary tracks . : 50 Dates
Data set name type : Creation date . . . : 2012/02/29
SMS Compressible. . : NO Referenced date . . : 2012/02/29
Expiration date . . : None
Note that the Record Length, Record Format, and Block Size seem to be what you'd expect based on the site commands I sent. However, the 1st extent and Secondary tracks look incorrect to me. The mainframe guy has given me about 10 different SPACE commands to try (along with some modifications to the block size). The record size is definitely 2000 characters, so that has stayed consistent. But so far, none of his suggestions has worked.
On my own, I discovered a different way of setting the parameters and tried that at one point. You can see it in the code as the commented-out line:
//ftpClient.sendSiteCommand("DCB=(RECFM=FB,LRECL=2000,BLKSIZE=26000)");
What's interesting is that when I used this command (and commented out the RECFM/LRECL/BLKSIZE commands), the FTP transmission completed successfully!! BUT the mainframe guy then shared the following remote file information, indicating that the various parameters were not set correctly. It would appear that the DCB command didn't work at all.
Data Set Name . . . . : BSFP.ICEDCDI.SPC10.T0090200
General Data Current Allocation
Management class . . : MCDFLT Allocated tracks . : 230
Storage class . . . : SCSTD Allocated extents . : 4
Volume serial . . . : SMS195
Device type . . . . : 3390
Data class . . . . . : DCDFLT
Organization . . . : PS Current Utilization
Record format . . . : VB Used tracks . . . . : 230
Record length . . . : 256 Used extents . . . : 4
Block size . . . . : 27000
1st extent tracks . : 100
Secondary tracks . : 50 Dates
Data set name type : Creation date . . . : 2012/02/28
SMS Compressible. . : NO Referenced date . . : 2012/02/29
Expiration date . . : None
Needless to say, that really put a damper on things.
An update from this morning: the mainframe guy reached out to other mainframe experts, one of whom told him that "TRK" was incorrect in the SPACE command, and to use the following instead:
ftpClient.sendSiteCommand("SPACE=(TRA,(PRI=500,SEC=500))");
I tried this (along with other variations, such as without the PRI= and SEC=), but the results are exactly the same (i.e., it fails at 86 MB).
As you can see, this is not a frivolous question. I'm not even delving into the many gyrations I went through in verifying that this was not a network issue. At this point, I am about 98.6% sure that this problem is being caused by the mainframe site commands.
If you can provide any assistance I would most greatly appreciate it!!
Since you're failing at a certain size, it's your mainframe space command.
A mainframe file can only have 16 extents, 1 primary allocation and 15 secondary allocations. If the disk pack is full or nearly full, you might not get your primary allocation.
In your current allocation, you're asking for 500 primary tracks and 15 * 500 secondary tracks, for a total of 8,000 tracks. A track will hold either 2 or 3 blocks of 26,000 bytes, depending on the disk drive track size. If you specify a block size of 0, the IBM operating system will calculate the most efficient block size.
A bit of math indicates you've allocated for 208,000 records (worst case).
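For what it's worth, here is that arithmetic spelled out as a minimal sketch (the blocks-per-track figure is the worst-case assumption from above, not a measured value):

// Rough capacity math for SPACE=(TRK,(500,500)) with BLKSIZE=26000 and LRECL=2000.
// Assumes the worst case of 2 blocks per 3390 track, per the discussion above.
int totalTracks = 500 + 15 * 500;        // primary + 15 secondary extents = 8,000 tracks
int blocksPerTrack = 2;                  // worst case for 26,000-byte blocks
int recordsPerBlock = 26000 / 2000;      // 13 fixed-length records per block
long capacityRecords = (long) totalTracks * blocksPerTrack * recordsPerBlock;  // 208,000 records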
You probably should specify your space in cylinders, and have a small primary and larger secondary. Something like:
ftpClient.sendSiteCommand("SPACE=(CYL,(30,300),RLSE)");
If your mainframe shop insists on you specifying tracks, try this:
ftpClient.sendSiteCommand("SPACE=(TRK,(450,4500),RLSE)");
I made both numbers divisible by 15, because if I recall correctly, a cylinder contains 15 tracks.
Edited to add:
I just found the IBM FTP commands page on the IBM web site. It appears you have to send each part of the space command separately, just like the DCB command.
ftpClient.sendSiteCommand("TR");
ftpClient.sendSiteCommand("PRI=450");
ftpClient.sendSiteCommand("SEC=4500");
I would like to add a few comments to the excellent answer given by Gilbert Le Blanc.
The reason for specifying a small primary and a large secondary space is a little counterintuitive.
Primary space allocations always request contiguous space (i.e. in one big chunk). The job will fail with a B37 abend when no volume (disk) can be found with that amount of contiguous space on it. In situations where the primary space allocation represents a significant percentage of a volume (3390 DASD devices are actually fairly small, on the order of 10 GB or so), the probability of not finding a volume with the required space can be significant. Hence the job blows up with a B37.
Secondary allocations do not have to be contiguous and may span multiple volumes. By requesting a fairly small primary quantity, you avoid the B37 abend, allowing the dataset to allocate secondary space to fulfill the total space requirement. The catch with this approach is that you may only get 15 secondary extents per volume (an artificial but historical limit that can be lifted by the DASD management in your shop). To avoid blowing the extent limit, which causes an E37 abend, you request a large extent size.
Finally, you can specify an FTP VCOUNT parameter that sets the number of volumes a dataset may span (each volume can have up to 15 extents). This can make for absolutely huge datasets.
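From the Java side that would again just be another SITE parameter; a hedged one-liner sketch (the count of 10 is only an example value, and the exact spelling the server accepts is installation-dependent):

// Hypothetical: allow the dataset to extend across up to 10 volumes.
ftpClient.sendSiteCommand("VCOUNT=10");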
All this is to point out that understanding traditional dataset allocation on an IBM mainframe involves a bit of witchcraft.
Life can be made a lot simpler by relying on System Managed Storage (SMS) to do the figuring for you. Most shops will have SMS in place. Instead of doing all the math to calculate primary/secondary extent sizes, you can probably just ask for the appropriate SMS DATACLAS. Most installations will have something suitable for large data sets such as yours. Each installation defines its own DATACLAS names, so your mainframe guys will have to dig them up for you. The SMS DATACLAS is specified using the FTP LOCSITE option: DATAC=xxx, where xxx is the appropriate DATACLAS name.
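A minimal sketch of what that could look like from this Java client (the DATACLAS name "LARGEDS" is made up, and whether your server wants DATACLAS=, DATAC=, or another spelling depends on the installation; since Commons Net is the remote client here, it would go out as a SITE command rather than LOCSITE):

// Hypothetical: let SMS pick the allocation via an installation-defined data class
// instead of hand-tuned SPACE/BLKSIZE values. "LARGEDS" is a placeholder name.
if (!ftpClient.sendSiteCommand("DATACLAS=LARGEDS")) {
    log.warn("SITE DATACLAS rejected: " + ftpClient.getReplyString().trim());
}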
Related
I'm using the Tabulate package to print data in table format. The output is sent to a webpage. With the default font everything works fine. However, upon changing the font family (e.g. Outfit from Google Fonts, or cursive), the columns stop being aligned. Are there any possible solutions?
Output with default font:
Strength: 16 Dmg: 50 Armor: 3.8 ShadowRes: 3.5%
Agility: 34 Spell: 183 FireRes: 5.1% NatureRes: 6.1%
Intellect: 61 Critical: 3.4% FrostRes: 6.3% ArcaneRes: 3.8%
Output with the Google font (I can't really show the misalignment here, because SO renders this in its default font):
Strength: 25 Dmg: 45 Armor: 3.1 ShadowRes: 3.2%
Agility: 20 Spell: 132 FireRes: 3.3% NatureRes: 3.6%
Intellect: 44 Critical: 2.0% FrostRes: 3.6% ArcaneRes: 3.8%
Thanks in advance!
You need a monospace font to keep the alignment: tabulate pads its columns with space characters, and those only line up when every character has the same width, which is exactly what a monospace font guarantees.
I have developed an ASP.NET Core 2.0 App with Angular 4 that I have tried to update to Angular 5.
To do the update I have followed this tutorial, Upgrade Angular 4 app to Angular 5 with Visual Studio 2017.
But when I run the app using node I get the following message:
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
User profile is available. Using 'C:\Users\Uic18.IC.000\AppData\Local\ASP.
NET\DataProtection-Keys' as key repository and Windows DPAPI to encrypt keys at
rest.
Hosting environment: Development
Content root path: D:\Desarrollo\MyApp\AngularWebApplic
ation\AngularWebApplication
Now listening on: http://localhost:50356
Application started. Press Ctrl+C to shut down.
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:50356/
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
Executing action method AngularWebApplication.Controllers.HomeController.I
ndex (AngularWebApplication) with arguments ((null)) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.ViewFeatures.Internal.ViewResultExecutor[1]
Executing ViewResult, running view at path /Views/Home/Index.cshtml.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
Executed action AngularWebApplication.Controllers.HomeController.Index (An
gularWebApplication) in 3819.9801ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
Request finished in 4181.4943ms 200 text/html; charset=utf-8
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:50356/dist/main-client.js?v
=8SjbzGJ0XAUj8-zoVkv-_b_lOUJstsV6wmYaawCZjd4
info: Microsoft.AspNetCore.NodeServices[0]
webpack built 87cdc734d642ca666961 in 10958ms
fail: Microsoft.AspNetCore.NodeServices[0]
Hash: 87cdc734d642ca666961
Version: webpack 3.11.0
Time: 10958ms
Asset Size Chunks Chunk Names
main-client.js 4.12 MB 0 [emitted] [big] main-client
main-client.js.map 5.07 MB 0 [emitted] main-client
[0] ./node_modules/@angular/core/esm5/core.js 628 kB {0} [built]
[79] multi event-source-polyfill webpack-hot-middleware/client?path=__we
bpack_hmr&dynamicPublicPath=true ./ClientApp/boot.browser.ts 52 bytes {0} [built
]
[80] ./node_modules/event-source-polyfill/src/eventsource.js 21.4 kB {0}
[built]
[81] (webpack)-hot-middleware/client.js?path=__webpack_hmr&dynamicPublic
Path=true 7.35 kB {0} [built]
[82] (webpack)/buildin/module.js 517 bytes {0} [built]
[83] ./node_modules/querystring-es3/index.js 127 bytes {0} [built]
[86] ./node_modules/strip-ansi/index.js 161 bytes {0} [built]
[88] (webpack)-hot-middleware/client-overlay.js 2.21 kB {0} [built]
[93] (webpack)-hot-middleware/process-update.js 4.33 kB {0} [built]
[94] ./ClientApp/boot.browser.ts 1.01 kB {0} [built]
[95] ./ClientApp/polyfills.ts 2.54 kB {0} [built]
[148] ./node_modules/reflect-metadata/Reflect.js 52.3 kB {0} [built]
[150] ./node_modules/bootstrap/dist/js/bootstrap.js 115 kB {0} [built]
[156] ./node_modules/@angular/platform-browser-dynamic/esm5/platform-brow
ser-dynamic.js 24.8 kB {0} [built]
[158] ./ClientApp/app/app.module.browser.ts 1.49 kB {0} [built]
+ 246 hidden modules
WARNING in ./node_modules/@angular/core/esm5/core.js
6558:15-36 Critical dependency: the request of a dependency is an expressi
on
@ ./node_modules/@angular/core/esm5/core.js
@ ./ClientApp/boot.browser.ts
@ multi event-source-polyfill webpack-hot-middleware/client?path=__webpac
k_hmr&dynamicPublicPath=true ./ClientApp/boot.browser.ts
WARNING in ./node_modules/@angular/core/esm5/core.js
6578:15-102 Critical dependency: the request of a dependency is an express
ion
@ ./node_modules/@angular/core/esm5/core.js
@ ./ClientApp/boot.browser.ts
@ multi event-source-polyfill webpack-hot-middleware/client?path=__webpac
k_hmr&dynamicPublicPath=true ./ClientApp/boot.browser.ts
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
Request finished in 4074.9028ms 200 application/javascript; charset=UTF-8
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:50356/dist/__webpack_hmr
And all I see in the browser is the text: Loading...
There is a fail: Microsoft.AspNetCore.NodeServices[0] message, but nothing else.
If I roll back the change in the Views/Home/Index.cshtml file, from <app>Loading...</app> back to <app asp-prerender-module="ClientApp/dist/main-server">Loading...</app>, I get another error:
NodeInvocationException: No provider for PlatformRef!
Error: No provider for PlatformRef!
at Error (native)
To solve it, I followed this SO answer, but without success.
Any ideas on how to solve this problem?
I am encountering the messages below after creating a UBIFS filesystem in a new UBI volume on an existing UBI partition.
This happens while reading all the files on the UBIFS via tar -cf /dev/null *:
UBIFS error (pid 1810): ubifs_read_node: bad node type (255 but expected 1)
UBIFS error (pid 1810): ubifs_read_node: bad node at LEB 33:967120, LEB mapping status 0
Not a node, first 24 bytes:
00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
CPU: 0 PID: 1810 Comm: tar Tainted: P O 3.18.80 #3
Backtrace:
[<8001b664>] (dump_backtrace) from [<8001b880>] (show_stack+0x18/0x1c)
r7:00000000 r6:00000000 r5:80000013 r4:80566cbc
[<8001b868>] (show_stack) from [<801d77e0>] (dump_stack+0x88/0xa4)
[<801d7758>] (dump_stack) from [<80171608>] (ubifs_read_node+0x1fc/0x2b8)
r7:00000021 r6:00000712 r5:000ec1d0 r4:bd59d000
[<8017140c>] (ubifs_read_node) from [<8018b654>](ubifs_tnc_read_node+0x88/0x124)
r10:00000000 r9:b4f9fdb0 r8:bd59d264 r7:00000001 r6:b71e6000 r5:bd59d000
r4:b6126148
[<8018b5cc>] (ubifs_tnc_read_node) from [<80174690>](ubifs_tnc_locate+0x108/0x1e0)
r7:00000001 r6:b71e6000 r5:00000001 r4:bd59d000
[<80174588>] (ubifs_tnc_locate) from [<80168200>] (do_readpage+0x1c0/0x39c)
r10:bd59d000 r9:000005b1 r8:00000000 r7:bd258710 r6:a5914000 r5:b71e6000
r4:be09e280
[<80168040>] (do_readpage) from [<80169398>] (ubifs_readpage+0x44/0x424)
r10:00000000 r9:00000000 r8:bd59d000 r7:be09e280 r6:00000000 r5:bd258710
r4:00000000
[<80169354>] (ubifs_readpage) from [<80089654>](generic_file_read_iter+0x48c/0x5d8)
r10:00000000 r9:00000000 r8:00000000 r7:be09e280 r6:bd2587d4 r5:bd5379e0
r4:00000000
[<800891c8>] (generic_file_read_iter) from [<800bbd60>](new_sync_read+0x84/0xa8)
r10:00000000 r9:b4f9e000 r8:80008e24 r7:bd6437a0 r6:bd5379e0 r5:b4f9ff80
r4:00001000
[<800bbcdc>] (new_sync_read) from [<800bc85c>] (__vfs_read+0x20/0x54)
r7:b4f9ff80 r6:7ec2d800 r5:00001000 r4:800bbcdc
[<800bc83c>] (__vfs_read) from [<800bc91c>] (vfs_read+0x8c/0xf4)
r5:00001000 r4:bd5379e0
[<800bc890>] (vfs_read) from [<800bc9cc>] (SyS_read+0x48/0x80)
r9:b4f9e000 r8:80008e24 r7:00001000 r6:7ec2d800 r5:bd5379e0 r4:bd5379e0
[<800bc984>] (SyS_read) from [<80008c80>] (ret_fast_syscall+0x0/0x3c)
r7:00000003 r6:7ec2d800 r5:00000008 r4:00082a08
UBIFS error (pid 1811): do_readpage: cannot read page 0 of inode 1457, error -22
Mounting looks like this:
UBI-0: ubi_attach_mtd_dev:default fastmap pool size: 190
UBI-0: ubi_attach_mtd_dev:default fastmap WL pool size: 25
UBI-0: ubi_attach_mtd_dev:attaching mtd3 to ubi0
UBI-0: scan_all:scanning is finished
UBI-0: ubi_attach_mtd_dev:attached mtd3 (name "data", size 3824 MiB)
UBI-0: ubi_attach_mtd_dev:PEB size: 1048576 bytes (1024 KiB), LEB size: 1040384 bytes
UBI-0: ubi_attach_mtd_dev:min./max. I/O unit sizes: 4096/4096, sub-page size 4096
UBI-0: ubi_attach_mtd_dev:VID header offset: 4096 (aligned 4096), data offset: 8192
UBI-0: ubi_attach_mtd_dev:good PEBs: 3816, bad PEBs: 8, corrupted PEBs: 0
UBI-0: ubi_attach_mtd_dev:user volume: 6, internal volumes: 1, max. volumes count: 128
UBI-0: ubi_attach_mtd_dev:max/mean erase counter: 4/2, WL threshold: 4096, image sequence number: 1362948729
UBI-0: ubi_attach_mtd_dev:available PEBs: 2313, total reserved PEBs: 1503, PEBs reserved for bad PEB handling: 72
UBI-0: ubi_thread:background thread "ubi_bgt0d" started, PID 419
UBIFS: mounted UBI device 0, volume 6, name "slot1", R/O mode
UBIFS: LEB size: 1040384 bytes (1016 KiB), min./max. I/O unit sizes: 4096 bytes/4096 bytes
UBIFS: FS size: 259055616 bytes (247 MiB, 249 LEBs), journal size 12484608 bytes (11 MiB, 12 LEBs)
UBIFS: reserved for root: 4952683 bytes (4836 KiB)
UBIFS: media format: w4/r0 (latest is w4/r0), UUID 92C7B251-2666-4717-B735-5539900FE749, small LPT model
The rate of reproduction is about one in 20 sequences of: ubirmvol, ubimkvol (256 MB), unpack a rootfs.tgz (20 MB, 40 MB unpacked) into the mounted UBIFS, umount, ubidetach, sync, reboot. No xattrs are created. No power cuts happen during the tests.
The system runs a 3.18.80 kernel on an ARM board. I tested other identical boards where only the NAND chip manufacturer was different.
I had CONFIG_MTD_UBI_FASTMAP enabled when the error occurred, but I used no explicit fastmap boot cmdline option. After the issue reproduced, I tried mounting the UBIFS with a rebuilt kernel that had the config option disabled, but the read error still occurred.
I also tried mounting with a kernel that had the "ubifs: Fix journal replay w.r.t. xattr nodes" patch [1] applied, after reading older reports of similar errors, but once the problem was reproduced on a NAND, the read error persisted.
[1] https://patchwork.ozlabs.org/patch/713213/
Is there some special requirement beyond just umount, ubidetach, and sync before running reboot, or regarding free space fixup? Would creating the filesystem image offline with mkfs.ubifs, instead of unpacking it via tar, make a difference here? Would it help to test with cherry-picked commits from Linux v4.14 that touch fs/ubifs?
I have files with columns like this. The sample input below is partial; please see the link further down for the full file. Each file will have only two rows.
Gene 0.4% 0.7% 1.1% 1.4% 1.8% 2.2% 2.5% 2.9% 3.3% 3.6% 4.0% 4.3% 4.7% 5.1% 5.4% 5.8% 6.2% 6.5% 6.9% 7.2% 7.6% 8.0% 8.3% 8.7% 9.1% 9.4% 9.8% 10.1% 10.5% 10.9% 11.2% 11.6% 12.0% 12.3% 12.7% 13.0% 13.4% 13.8% 14.1% 14.5% 14.9% 15.2% 15.6% 15.9% 16.3% 16.7% 17.0% 17.4% 17.8% 18.1% 18.5% 18.8% 19.2% 19.6% 19.9% 20.3% 20.7% 21.0% 21.4% 21.7% 22.1% 22.5% 22.8% 23.2% 23.6% 23.9% 24.3% 24.6% 25.0% 25.4% 25.7% 26.1% 26.4% 26.8% 27.2% 27.5% 27.9% 28.3% 28.6% 29.0% 29.3% 29.7% 30.1% 30.4% 30.8% 31.2% 31.5% 31.9% 32.2% 32.6% 33.0% 33.3% 33.7% 34.1% 34.4% 34.8% 35.1% 35.5% 35.9% 36.2% 36.6% 37.0% 37.3% 37.7% 38.0% 38.4% 38.8% 39.1% 39.5% 39.9% 40.2% 40.6% 40.9% 41.3% 41.7% 42.0% 42.4% 42.8% 43.1% 43.5% 43.8% 44.2% 44.6% 44.9% 45.3% 45.7% 46.0% 46.4% 46.7% 47.1% 47.5% 47.8% 48.2% 48.6% 48.9% 49.3% 49.6% 50.0% 50.4% 50.7% 51.1% 51.4% 51.8% 52.2% 52.5% 52.9% 53.3% 53.6% 54.0% 54.3% 54.7% 55.1% 55.4% 55.8% 56.2% 56.5% 56.9% 57.2% 57.6% 58.0% 58.3% 58.7% 59.1% 59.4% 59.8% 60.1% 60.5% 60.9% 61.2% 61.6% 62.0% 62.3% 62.7% 63.0% 63.4% 63.8% 64.1% 64.5% 64.9% 65.2% 65.6% 65.9% 66.3% 66.7% 67.0% 67.4% 67.8% 68.1% 68.5% 68.8% 69.2% 69.6% 69.9% 70.3% 70.7% 71.0% 71.4% 71.7% 72.1% 72.5% 72.8% 73.2% 73.6% 73.9% 74.3% 74.6% 75.0% 75.4% 75.7% 76.1% 76.4% 76.8% 77.2% 77.5% 77.9% 78.3% 78.6% 79.0% 79.3% 79.7% 80.1% 80.4% 80.8% 81.2% 81.5% 81.9% 82.2% 82.6% 83.0% 83.3% 83.7% 84.1% 84.4% 84.8% 85.1% 85.5% 85.9% 86.2% 86.6% 87.0% 87.3% 87.7% 88.0% 88.4% 88.8% 89.1% 89.5% 89.9% 90.2% 90.6% 90.9% 91.3% 91.7% 92.0% 92.4% 92.8% 93.1% 93.5% 93.8% 94.2% 94.6% 94.9% 95.3% 95.7% 96.0% 96.4% 96.7% 97.1% 97.5% 97.8% 98.2% 98.6% 98.9% 99.3% 99.6% 100.0% 0.4% 0.7% 1.1% 1.4% 1.8% 2.2% 2.5% 2.9% 3.3% 3.6% 4.0% 4.3% 4.7% 5.1% 5.4% 5.8% 6.2% 6.5% 6.9% 7.2% 7.6% 8.0% 8.3% 8.7% 9.1% 9.4% 9.8% 10.1% 10.5% 10.9% 11.2% 11.6% 12.0% 12.3% 12.7% 13.0% 13.4% 13.8% 14.1% 14.5% 14.9% 15.2% 15.6% 15.9% 16.3% 16.7% 17.0% 17.4% 17.8% 18.1% 18.5% 18.8% 19.2% 19.6% 19.9% 20.3% 20.7% 21.0% 21.4% 21.7% 22.1% 22.5% 22.8% 23.2% 23.6% 23.9% 24.3% 24.6% 25.0% 25.4% 25.7% 26.1% 26.4% 26.8% 27.2% 27.5% 27.9% 28.3% 28.6% 29.0% 29.3% 29.7% 30.1% 30.4% 30.8% 31.2% 31.5% 31.9% 32.2% 32.6% 33.0% 33.3% 33.7% 34.1% 34.4% 34.8% 35.1% 35.5% 35.9% 36.2% 36.6% 37.0% 37.3% 37.7% 38.0% 38.4% 38.8% 39.1% 39.5% 39.9% 40.2% 40.6% 40.9% 41.3% 41.7% 42.0% 42.4% 42.8% 43.1% 43.5% 43.8% 44.2% 44.6% 44.9% 45.3% 45.7% 46.0% 46.4% 46.7% 47.1% 47.5% 47.8% 48.2% 48.6% 48.9% 49.3% 49.6% 50.0% 50.4% 50.7% 51.1% 51.4% 51.8% 52.2% 52.5% 52.9% 53.3% 53.6% 54.0% 54.3% 54.7% 55.1% 55.4% 55.8% 56.2% 56.5% 56.9% 57.2% 57.6% 58.0% 58.3% 58.7% 59.1% 59.4% 59.8% 60.1% 60.5% 60.9% 61.2% 61.6% 62.0% 62.3% 62.7% 63.0% 63.4% 63.8% 64.1% 64.5% 64.9% 65.2% 65.6% 65.9% 66.3% 66.7% 67.0% 67.4% 67.8% 68.1% 68.5% 68.8% 69.2% 69.6% 69.9% 70.3% 70.7% 71.0% 71.4% 71.7% 72.1% 72.5% 72.8% 73.2% 73.6% 73.9% 74.3% 74.6% 75.0% 75.4% 75.7% 76.1% 76.4% 76.8% 77.2% 77.5% 77.9% 78.3% 78.6% 79.0% 79.3% 79.7% 80.1% 80.4% 80.8% 81.2% 81.5% 81.9% 82.2% 82.6% 83.0% 83.3% 83.7% 84.1% 84.4% 84.8% 85.1% 85.5% 85.9% 86.2% 86.6% 87.0% 87.3% 87.7% 88.0% 88.4% 88.8% 89.1% 89.5% 89.9% 90.2% 90.6% 90.9% 91.3% 91.7% 92.0% 92.4% 92.8% 93.1% 93.5% 93.8% 94.2% 94.6% 94.9% 95.3% 95.7% 96.0% 96.4% 96.7% 97.1% 97.5% 97.8% 98.2% 98.6% 98.9% 99.3% 99.6% 100.0%
Basically, here is what I need to be done.
a. Start from second column which is 0.4% here.
b. Go until you hit "10" in the header name. If the header name is exactly 10.0%, then include that column too. If not, only include until the column before it. In this example, since we have 10.1% (29th column), we will be including columns starting from 0.4%(second) until 9.8% which is the 28th column. If the 29th column was to be 10.0%, then it would have been included too.
c. Average the values for these respective columns in the second row (the data is not shown here; please use this link for the full dataset: https://goo.gl/W8jND7). In this example, that is from 0.4% (second column) through 9.8% (28th column).
d. In the output, print first column which is "Gene", and this average value with column header being
Gene Average_10%
e. Then start from 10.1% (29th column) and check until you hit "20" in the header name. Repeat steps b through d. And print output as
Gene Average_10% Average_20%
Repeat this until you have
Gene Average_10% Average_20% Average_30% Average_40% Average_50% Average_60% Average_70% Average_80% Average_90% Average_100%
f. After you hit 100%, it means one dataset is done.
g. If you look at my column header carefully, there is another set of 0.4%-100% columns after the first 100%. There will be 13 of these 0.4%-100% sets in the input file at the above link.
i. I have multiple files, the headers can be
1% 2% 3%....100%
1.5% 2.5% 3.5%....100%
It varies from file to file, but the logic of averaging (when you hit "10", "20", etc.) is always the same. And the number of samples, 13, is also the same, which means each file will have 13 sets ending at 100%.
I should say, it's a horrible format for this task. I don't expect anyone to come up with a final solution for you, but this is how I would approach it:
awk 'NR == 1 {
    gsub("%", "");
    for (f = 2; f <= NF; f++) {
        for (i = 1; i < 10; i++)
            if ($f < 10*i && $(f+1) >= 10*i) print f, $f
        if ($f == 100) print f, $f
    }
}' file
28 9.8
56 19.9
83 29.7
111 39.9
138 49.6
166 59.8
194 69.9
221 79.7
249 89.9
277 100.0
304 9.8
332 19.9
359 29.7
387 39.9
414 49.6
442 59.8
470 69.9
497 79.7
525 89.9
553 100.0
Here I'm printing the column index and the threshold used, for verification purposes. Once you have the column boundaries extracted, it should be straightforward to sum the respective columns. Note that by your stated logic 100% would never be included, but that seems wrong, so I added a special case for it.
I'm building a web crawler in Node.js using the npm crawler package. My program currently creates 5 child processes, each of which instantiates a new Crawler that crawls a list of URLs provided by the parent.
When it runs for about 15-20 minutes, it slows to a halt, and the STATE column in the output of the top command reads "stuck" for all the children. [see below]
I have little knowledge of the top command and the columns it provides, but I want to know: is there a way to find out what is causing the processes to slow down by looking at the output of top? I realize it is probably my code that has a bug in it, but I want to know where I should start debugging: memory leak, caching issue, not enough children, too many children, etc.
Below is the entire output of top:
PID COMMAND %CPU TIME #TH #WQ #PORT #MREG MEM RPRVT PURG CMPRS VPRVT VSIZE PGRP PPID STATE UID FAULTS COW MSGSENT MSGRECV SYSBSD SYSMACH
11615 node 2.0 17:16.43 8 0 42 2519 94M- 94M- 0B 1347M+ 1538M 4150M 11610 11610 stuck 541697072 14789409+ 218 168 21 6481040 63691
11614 node 2.0 16:57.66 8 0 42 2448 47M- 47M- 0B 1360M+ 1498M- 4123M 11610 11610 stuck 541697072 14956093+ 217 151 21 5707766 64937
11613 node 4.4 17:17.37 8 0 44 2415 100M+ 100M+ 0B 1292M- 1485M 4114M 11610 11610 sleeping 541697072 14896418+ 215 181 22 6881669+ 66098+
11612 node 10.3 17:37.81 8 0 42 2478 24M+ 24M+ 0B 1400M- 1512M 4129M 11610 11610 stuck 541697072 14386703+ 215 171 21 7083645+ 65551
11611 node 2.0 17:09.52 8 0 42 2424 68M- 68M- 0B 1321M+ 1483M 4111M 11610 11610 sleeping 541697072 14504735+ 220 168 21 6355162 63701
11610 node 0.0 00:04.63 8 0 42 208 4096B 0B 0B 126M 227M 3107M 11610 11446 sleeping 541697072 45184 410 52 21 36376 6939
Here are the dependencies:
├── colors#0.6.2
├── crawler#0.2.6
├── log-symbols#1.0.0
├── robots#0.9.4
└── sitemapper#0.0.1
Sitemapper is one I wrote myself, which could be a source of bugs.