How can a custom snapshot location be used?

I use the stack Haskell tool. It caches compiled packages in snapshot folders like:
Users
+-- ...
+-- Its_Me
    +-- ...
    +-- .stack
        +-- ...
        +-- snapshots
            +-- aarch64-osx
            |   +-- ...
            +-- x86_64-osx
                +-- ...
                +-- b673acf89... <== my project uses this
So, is there a way to tell stack to use not /Users/Its_Me/.stack/snapshots/x86_64-osx/b673acf89... but, for example, /Users/Its_Me/.stack/snapshots/x86_64-osx/I_WANT_HERE ?
To be honest, the only option that comes to my mind is changing the whole stack root...
PS. By the way, this hash - b673acf89... - what exactly is it a hash of?

As far as I can tell from stack's documentation, it doesn't describe how the snapshot folder is organized, nor does it provide any options to directly influence it. I would therefore assume it is internal logic, and I wouldn't rely on it in the long term.
The closest you can get to an isolated snapshot directory is to provide a custom snapshot, but that doesn't give you any control over the folder name.
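What you can control is where the whole tree lives: the stack root (and the snapshots directory beneath it) can be relocated via the STACK_ROOT environment variable or the global --stack-root flag. A minimal sketch (the path is just an example):
$ STACK_ROOT=/Users/Its_Me/my-stack-root stack build
$ stack --stack-root /Users/Its_Me/my-stack-root build   # equivalent
And a custom snapshot is just a YAML file referenced from stack.yaml as the resolver; a minimal sketch, assuming an LTS base (my-snapshot.yaml and the extra package are hypothetical names, and the exact fields vary by stack version):
# my-snapshot.yaml
resolver: lts-18.28
packages:
- some-extra-package-1.0.0
Then, in stack.yaml:
resolver: ./my-snapshot.yaml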

Related

Mono on Linux: Event Logging

I am working on getting C# applications written for Windows to run on Linux using Mono. I am using Mono 5.18.0.240 from the Mono repository, on Ubuntu 18.04.1.
My understanding is that Mono includes a local file-based event logger. By setting the environment variable MONO_EVENTLOG_TYPE to local (followed by an optional path), events are logged to a file-based log. However, logged events do not seem to be sorted into the source directories that get created; all events end up in the same directory, which makes the files harder to navigate when many events are logged.
Consider this C# program that just logs two events each for three event sources:
using System;
using System.Diagnostics;

namespace EventLogTest
{
    class Program
    {
        public static void Main()
        {
            var sources = new string[] { "source1", "source2", "source3" };
            foreach (var source in sources)
            {
                // Register the source under the "Application" log if it is not known yet
                if (!EventLog.SourceExists(source))
                    EventLog.CreateEventSource(source, "Application");
                EventLog log = new EventLog();
                log.Source = source;
                log.WriteEntry("some event");
                log.WriteEntry("another event");
            }
        }
    }
}
We can build the program into an executable and then run it:
$ csc events.cs
Microsoft (R) Visual C# Compiler version 2.8.2.62916 (2ad4aabc)
Copyright (C) Microsoft Corporation. All rights reserved.
$ MONO_EVENTLOG_TYPE=local:./eventlog mono ./events.exe
The resulting structure of the eventlog directory looks like this:
$ tree ./eventlog
./eventlog
└── Application
    ├── 1.log
    ├── 2.log
    ├── 3.log
    ├── 4.log
    ├── 5.log
    ├── 6.log
    ├── Application
    ├── source1
    ├── source2
    └── source3

5 directories, 6 files
Note that the directories source1, source2, and source3 were created, but the six log files were placed in the top-level Application directory instead of the source directories. If we look at the Source field of each log file, we can see that the source is correct:
$ grep -a Source ./eventlog/Application/*.log
eventlog/Application/1.log:Source: source1
eventlog/Application/2.log:Source: source1
eventlog/Application/3.log:Source: source2
eventlog/Application/4.log:Source: source2
eventlog/Application/5.log:Source: source3
eventlog/Application/6.log:Source: source3
My expectation is that the above directory structure should look like this instead, considering that each event log source had two events written (and I don't see the point of the second Application directory):
./eventlog
└── Application
    ├── source1
    │   ├── 1.log
    │   └── 2.log
    ├── source2
    │   ├── 1.log
    │   └── 2.log
    └── source3
        ├── 1.log
        └── 2.log
Now, I know that the obvious solution might be to use a logging solution other than Mono's built-in event logging. However, at this point, it is important that I stick with the built-in tools available.
Is there a way to configure Mono's built-in local event logging to save the events to log files in the relevant source directory, or is this possibly a bug in Mono?

HTTP Status 500 - /WebContent/olaMundo.xhtml Not Found in ExternalContext as a Resource [duplicate]

My JSF web application shows the following error:
/index.xhtml Not Found in ExternalContext as a Resource.
My directory structure is:
- Java Resource
-- src
--- br.com.k19.controle
---- NumeroAleatorioBean.java
--- resources
- JavaScript Resources
- build
- WebContent
-- META-INF
-- Web Pages
--- index.xhtml
--- formulario.xhtml
-- Web-Inf
Where do I need to put my /index.xhtml in this structure?
The WebContent folder represents the web content. You placed the index.xhtml file inside the Web Pages subfolder, so the right URL would be
http://localhost:8080/ProjectName/Web Pages/index.xhtml
and thus not
http://localhost:8080/ProjectName/index.xhtml
as you seemed to expect.
If you want to have it on the context root, just get rid of the Web Pages folder altogether and move those .xhtml files directly into the WebContent folder, at the same level as META-INF and WEB-INF:
ProjectName
 |-- Java Resources
 |    `-- src
 |         `-- br.com.k19.controle
 |              `-- NumeroAleatorioBean.java
 |-- resources
 |-- JavaScript Resources
 |-- build
 `-- WebContent
      |-- META-INF
      |-- WEB-INF
      |    |-- faces-config.xml
      |    `-- web.xml
      |-- index.xhtml
      `-- formulario.xhtml
Note: Java is case sensitive. Web-Inf is definitely not the same as WEB-INF. Be careful or you'll have a security hole.
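For reference, the web.xml in the tree above would typically map the FacesServlet to .xhtml requests along these lines; a minimal sketch, assuming JSF 2.x (your servlet name and URL pattern may differ):
<servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>*.xhtml</url-pattern>
</servlet-mapping>
With this mapping, requesting /ProjectName/index.xhtml lets the FacesServlet resolve index.xhtml from the root of WebContent.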
I faced this issue recently. I cleaned the Tomcat work directory, then ran Clean and Publish; after that, when I started the server, the application ran smoothly.
For a standalone Tomcat server, clean the temp and work directories, and remove all the existing unarchived projects from the webapps directory. Then restart Tomcat; that worked for me.

Read all files in a nested folder in Spark

If we have a folder folder containing all .txt files, we can read them all using sc.textFile("folder/*.txt"). But what if I have a folder folder containing even more folders named date-wise, like 03, 04, ..., which in turn contain some .log files? How do I read these in Spark?
In my case, the structure is even more nested & complex, so a general answer is preferred.
If the directory structure is regular, let's say something like this:
folder
├── a
│   ├── a
│   │   └── aa.txt
│   └── b
│       └── ab.txt
└── b
    ├── a
    │   └── ba.txt
    └── b
        └── bb.txt
you can use the * wildcard for each level of nesting, as shown below:
>>> sc.wholeTextFiles("/folder/*/*/*.txt").map(lambda x: x[0]).collect()
[u'file:/folder/a/a/aa.txt',
 u'file:/folder/a/b/ab.txt',
 u'file:/folder/b/a/ba.txt',
 u'file:/folder/b/b/bb.txt']
Spark 3.0 provides a recursiveFileLookup option to load files from nested subfolders:
val df = sparkSession.read
  .option("recursiveFileLookup", "true")
  .option("header", "true")
  .csv("src/main/resources/nested")
This recursively loads the files from src/main/resources/nested and its subfolders.
If you want to read only files whose names start with "a", you can use
sc.wholeTextFiles("/folder/a*/*/*.txt") or sc.wholeTextFiles("/folder/a*/a*/*.txt")
as well; * works as a wildcard at any level.
sc.wholeTextFiles("/directory/201910*/part-*.lzo") get all match files name, not files content.
if you want to load the contents of all matched files in a directory, you should use
sc.textFile("/directory/201910*/part-*.lzo")
and setting reading directory recursive!
sc._jsc.hadoopConfiguration().set("mapreduce.input.fileinputformat.input.dir.recursive", "true")
TIPS: scala differ with python, below set use to scala!
sc.hadoopConfiguration.set("mapreduce.input.fileinputformat.input.dir.recursive", "true")

Use directories for partition pruning in Spark SQL

I have data files (json in this example but could also be avro) written in a directory structure like:
dataroot
+-- year=2015
    +-- month=06
    |   +-- day=01
    |   |   +-- data1.json
    |   |   +-- data2.json
    |   |   +-- data3.json
    |   +-- day=02
    |       +-- data1.json
    |       +-- data2.json
    |       +-- data3.json
    +-- month=07
        +-- day=20
        |   +-- data1.json
        |   +-- data2.json
        |   +-- data3.json
        +-- day=21
        |   +-- data1.json
        |   +-- data2.json
        |   +-- data3.json
        +-- day=22
            +-- data1.json
            +-- data2.json
Using spark-sql I create a temporary table:
CREATE TEMPORARY TABLE dataTable
USING org.apache.spark.sql.json
OPTIONS (
path "dataroot/*"
)
Querying the table works well, but so far I have not been able to use the directories for pruning.
Is there a way to register the directory structure as partitions (without using Hive) to avoid scanning the whole tree when I query? Say I want to compare data for the first day of every month and only read directories for these days.
With Apache Drill I can use directories as predicates during query time with dir0 etc. Is it possible to do something similar with Spark SQL?
As far as I know, partition autodiscovery only works for Parquet files in Spark SQL. See http://spark.apache.org/docs/latest/sql-programming-guide.html#partition-discovery
Use EXPLAIN to see the physical plan and check which folders will be scanned.
Also, you can describe the partitions when creating the table so Spark can use them.
I am not sure Spark 1.6 applies partition pruning correctly: with spark.sql.hive.convertMetastoreParquet set to false I can see the pruning, but with it set to true (the default) Spark appears to scan all partitions (although this did not affect performance at all).
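In newer Spark versions (2.x and later), partition discovery works for the other built-in file sources too, including JSON. A minimal Scala sketch, assuming Spark 2.x and the dataroot layout above (the paths and filter value are just examples); with basePath set, Spark derives year/month/day as partition columns from the directory names, and filters on them prune directories instead of scanning the whole tree:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("pruning-sketch")
  .master("local[*]")
  .getOrCreate()

// basePath tells Spark where the partitioned tree starts,
// so year=/month=/day= become partition columns.
val df = spark.read
  .option("basePath", "dataroot")
  .json("dataroot/year=*/month=*/day=*")

// Compare the first day of every month: only day=01 directories
// should be listed; verify with the physical plan.
val firstDays = df.filter(df("day") === 1)
firstDays.explain()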
