I'm trying to send a file to the browser for download, but I'm having no luck using $this->response->send_file($file_path); in the controller.
I get the following error:
ErrorException [ Warning ]: finfo::file() [finfo.file]: Empty filename or path
$file_path can be either an absolute or a relative path, but I get the same error either way. After looking at the Kohana code for this implementation, I just can't figure out how it is supposed to work.
The following code shows how a base filename (e.g. filename.ext) is passed into File::mime(), which is wrong:
https://github.com/kohana/core/blob/3.2/develop/classes/kohana/response.php#L434-453
// Get the complete file path
$filename = realpath($filename);

if (empty($download))
{
    // Use the file name as the download file name
    $download = pathinfo($filename, PATHINFO_BASENAME);
}

// Get the file size
$size = filesize($filename);

if ( ! isset($mime))
{
    // Get the mime type
    // HERE'S THE ISSUE!!!
    $mime = File::mime($download);
}
File::mime() expects an absolute or relative path on the filesystem, but $download will only ever be a base filename (e.g. filename.ext).
The only solution that works for me right now is to change the code in the send_file() method in classes/kohana/response.php from
$mime = File::mime($download);
to
$mime = File::mime($filename);
Kohana 3.3 has changed this implementation to:
$mime = File::mime_by_ext(pathinfo($download, PATHINFO_EXTENSION));
Essentially, send_file() does not work in 3.2 without this fix. Is this a bug, or am I missing something?
I was using and linking to the 3.2 develop branch. This issue does not exist in the 3.2 master branch.
For those interested, follow discussion on this pull request to view the eventual fix: https://github.com/kohana/core/pull/183
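If you'd rather not patch classes/kohana/response.php, a caller-side workaround is to supply the mime type yourself so the buggy File::mime($download) branch is never reached. This is only a sketch: it assumes your checkout honours a mime_type key in the $options array (the isset($mime) check above suggests it does), and the file path is hypothetical.

// Work out the mime type from the full path ourselves and pass it to send_file(),
// so send_file() never calls File::mime() with only the basename.
$file_path = APPPATH.'uploads/report.pdf'; // hypothetical file
$this->response->send_file($file_path, NULL, array(
    'mime_type' => File::mime($file_path),
));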
I set up Neovim LSP using nvim-lspconfig and lsp-installer, through which I also installed the pyright server.
Without any further configuration it worked out of the box. However, when I have a class in a subfolder and add a new method to it, pyright does not recognize that method when I try to access it from a different file. When I restart Neovim, or close and reopen the file, pyright suddenly recognizes the newly added method.
I also tried :LspRestart with no effect.
I tried to add some settings to the pyright server:
return {
  settings = {
    python = {
      analysis = {
        autoSearchPaths = true,
        diagnosticMode = "workspace",
        useLibraryCodeForTypes = true,
      }
    }
  },
}
But this also had no effect.
:LspLog also does not show anything which could point to the issue:
[START][2022-07-15 11:11:05] LSP logging initiated
[WARN][2022-07-15 11:11:09] ...lsp/handlers.lua:109 "The language server pyright triggers a registerCapability handler despite dynamicRegistration set to false. Report upstream, this warning is harmless"
[WARN][2022-07-15 11:11:09] ...lsp/handlers.lua:456 "stubPath typings is not a valid directory."
[WARN][2022-07-15 11:11:20] ...lsp/handlers.lua:109 "The language server pyright triggers a registerCapability handler despite dynamicRegistration set to false. Report upstream, this warning is harmless"
I also could not find any setting related to this issue that could solve it.
Since I am new to Python, the way I import and structure classes might not be common and might itself be causing this problem.
As the main entry point I have main.py in the root folder.
All other source files are in a program/ folder, which does not have an __init__.py.
Inside program/ there are folders that each have an __init__.py file, e.g. core/.
core/__init__.py:
from .myClass import myClass
and in main.py I import it like this:
from program.core import myClass
myClass.newMethod() # this is only recognized by lsp/pyright after the file is closed and reopened
Is the issue a bug in pyright (not likely, I guess), a missing setting, or my odd folder/import structure?
Can you try this: create (or modify) a pyproject.toml in the project root directory and add the following lines:
[tool.pyright]
extraPaths = ["program/core", "program/directory_2", "program/directory_3"]
The idea is that you have to add the subdirectories manually, which is tedious, but at least it works in my case.
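If you would rather keep this in your Neovim configuration instead of pyproject.toml, pyright also accepts the same paths through its python.analysis.extraPaths setting. A rough equivalent via nvim-lspconfig (the directory names are the same placeholders as above):

-- Pass extraPaths through the LSP settings instead of pyproject.toml
require("lspconfig").pyright.setup({
  settings = {
    python = {
      analysis = {
        autoSearchPaths = true,
        diagnosticMode = "workspace",
        useLibraryCodeForTypes = true,
        extraPaths = { "program/core", "program/directory_2", "program/directory_3" },
      },
    },
  },
})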
I'm trying to install an external binary on NixOS in a declarative way. In the Nixpkgs manual I found the following way of getting an external binary into NixOS:
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "goss";

  src = pkgs.fetchurl {
    url = "https://github.com/aelsabbahy/goss/releases/download/v0.3.13/goss-linux-amd64";
    sha256 = "1q0kfdbifffszikcl0warzmqvsbx4bg19l9a3vv6yww2jvzj4dgb";
  };

  phases = ["installPhase"];

  installPhase = ''
  '';
}
But I'm wondering: what should I add inside installPhase to get this binary installed into the system?
This seems to be an open source Go application, so it's preferable to use Nixpkgs' Go support instead, which may be more straightforward than patching a binary.
That said, installPhase is responsible for creating the $out path; typically mkdir -p $out/bin followed by cp, make install, or similar commands.
So that's not actually installing it into the system; after all Nix derivations are not supposed to have side effects. "Installing" it into the system is the responsibility of NixOS's derivations, as configured by you.
You could say that 'installation' is the combination of modifying the NixOS configuration + switching to the new NixOS. I tend to think about the modification to the configuration only; the build and switch feel like implementation details, even though nixos-rebuild is usually a manual operation.
Example:
installPhase = ''
  install -D $src $out/bin/goss
  chmod a+x $out/bin/goss
'';
Normally chmod would be done to a local file by the build phase, but we don't really need that phase here.
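To actually "install" it system-wide in the sense described above, reference the derivation from your NixOS configuration. A sketch, assuming the expression from the question is saved as ./goss.nix next to configuration.nix:

# configuration.nix (fragment): put the goss binary on every user's PATH
{ pkgs, ... }:

{
  environment.systemPackages = [
    (pkgs.callPackage ./goss.nix { })
  ];
}

After a nixos-rebuild switch, goss ends up on the $PATH via the system profile.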
I have no idea why this was so hard to figure out. Having robust configuration systems is fine, but at the end of the day sometimes you just need to be able to download and expose a single flipping file on the $PATH.
The result of fetchurl is "the unaltered contents of the URL within the Nix store", which is being used as the src. So in installPhase, $src points to the downloaded data, and you just have to copy/install/link it into the right place under $out.
pkgs.stdenv.mkDerivation {
  name = "hello_static";
  src = pkgs.fetchurl {
    name = "hello_static";
    url = "https://raw.githubusercontent.com/ruanyf/simple-bash-scripts/6e837f949010e0f5e9305e629da946de12cc63e8/scripts/hello-world.sh";
    sha256 = "sha256:somE27ajbm0TtWv9tyeqTWDW3gbIs6xvlcFS9QS1ZJ0=";
  };
  phases = [ "installPhase" ];
  installPhase = ''
    install -D $src $out/bin/hello_static
  '';
};
My Python version is 3.5, installed through Anaconda on Windows 10. I'm using pyminizip because I need password protection for my zip files, and zipfile doesn't support it yet.
I am able to zip a single file with pyminizip.compress, and the encryption works as expected. However, when trying to use pyminizip.compress_multiple I always get a Python crash, and I believe it's because my input format is wrong.
What I would like to know is: what is the acceptable format for the "src file LIST path" argument? From pyminizip's documentation:
pyminizip.compress_multiple([u'pyminizip.so', 'file2.txt'], "file.zip", "1233", 4, progress)
Args:
1. src file LIST path (list)
2. dst file path (string)
3. password (string) or None (to create no-password zip)
4. compress_level(int) between 1 to 9, 1 (more fast) <---> 9 (more compress)
It seems the first argument, "src file LIST path", should be a list containing all files to be zipped. Accordingly, I tried to use compress_multiple to compress a single file:
pyminizip.compress_multiple( ['Filename.txt'], 'output.zip', 'password', 4, optional)
and it led to a Python crash. So I tried to add a full path to the args:
pyminizip.compress_multiple( [os.getcwd(), 'Filename.txt'], ... )
and still it crashed. So I thought maybe I had to split the path like this:
path = os.getcwd().split( os.sep )
pyminizip.compress_multiple( [path, 'Filename.txt'], ...)
Still no luck. Any ideas?
pyminizip requires the path (absolute, or relative to where the script is running from) for each file.
Your example:
pyminizip.compress_multiple( [os.getcwd(), 'Filename.txt'], ... )
passes a list of two "files": os.getcwd(), and then another file, 'Filename.txt'. You need to combine them into a single path using os.path.join().
In your filename example, you will need:
pyminizip.compress_multiple( [os.path.join(os.getcwd(), 'Filename.txt')],...)
Conversely, for multiple files:
pyminizip.compress_multiple( [os.path.join(os.getcwd(), 'Filename1.txt'), os.path.join(os.getcwd(), 'Filename2.txt')],...)
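Putting it together for the single-file case from the question, here is a minimal sketch using the five-argument form quoted above (newer pyminizip releases insert a second list of in-archive prefix paths, as in the PyPI example in the next answer):

import os
import pyminizip

def progress(*args):
    pass  # no-op progress callback (the last positional argument in the quoted usage)

# Full path to the file, not just the basename
src = os.path.join(os.getcwd(), 'Filename.txt')

# (src file list, dst zip path, password, compression level 1-9, progress callback)
pyminizip.compress_multiple([src], 'output.zip', 'password', 4, progress)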
From here - https://pypi.org/project/pyminizip/, the usage of compress_multiple is
pyminizip.compress_multiple([u'pyminizip.so', 'file2.txt'], [u'/path_for_file1', u'/path_for_file2'], "file.zip", "1233", 4, progress)
The second parameter is a bit confusing, but if used, it will create a zip file which, when uncompressed, will create a directory structure like:
I have an uglify function that creates a file lib-0.1.4-min.js and then symlinks that to lib-production-min.js. 0.1.4 is the current version.
Due to synchronization of this directory, lib-production-min.js is sometimes a broken link.
When I run the compile function, fs.existsSync( "lib-production-min.js" ) returns false. When I try to create the symlink later, Node errors out with "file already exists".
var version = 'lib-0.1.4-min.js';
var prod = 'lib-production-min.js';
// if production exists, get rid of it
if( fs.existsSync(prod) ) fs.unlinkSync( prod ); // not exists - not deleted
// link version to production
fs.symlinkSync( version, prod ); // ERROR: file already exists
How do I check whether this dead link is in the directory?
Will a normal fs.unlinkSync( "lib-production-min.js" ) delete it?
fs.lstat() or fs.lstatSync() might help you. They return information about the link itself rather than following it.
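For example, a sketch of the compile step using lstatSync to detect the dead link (variable names taken from the question):

var fs = require('fs');

var version = 'lib-0.1.4-min.js';
var prod = 'lib-production-min.js';

// lstatSync looks at the link itself, so it also "sees" a dead link,
// whereas existsSync follows the link and reports false for a broken one.
var linkPresent = true;
try {
  fs.lstatSync(prod);
} catch (e) {
  linkPresent = false; // nothing at all (not even a dead link) at this path
}

if (linkPresent) fs.unlinkSync(prod); // unlink removes the link itself, dead or not
fs.symlinkSync(version, prod);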
Use fs.readlinkSync(symlinkPath) to get the path the symlink points to, and then use fs.existsSync with that path.
The problem is that the link file itself exists; it is the destination of the link that is missing.
I'm writing a Groovy script that I want to be controlled via a properties file stored in the same folder. However, I want to be able to call this script from anywhere. When I run the script, it always looks for the properties file relative to where it is run from, not where the script is.
How can I access the path of the script file from within the script?
You are correct that new File(".").getCanonicalPath() does not work. That returns the working directory.
To get the script directory
scriptDir = new File(getClass().protectionDomain.codeSource.location.path).parent
To get the script file path
scriptFile = getClass().protectionDomain.codeSource.location.path
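For the original use case (a properties file next to the script), that directory can then be used directly. A sketch, where config.properties is an assumed file name:

// Locate the directory the script lives in, then load the properties file next to it
def scriptDir = new File(getClass().protectionDomain.codeSource.location.path).parent
def props = new Properties()
new File(scriptDir, 'config.properties').withInputStream { props.load(it) }
println props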
As of Groovy 2.3.0 the @SourceURI annotation can be used to populate a variable with the URI of the script's location. This URI can then be used to get the path to the script:
import groovy.transform.SourceURI
import java.nio.file.Path
import java.nio.file.Paths
@SourceURI
URI sourceUri
Path scriptLocation = Paths.get(sourceUri)
Note that this will only work if the URI is a file: URI (or another URI scheme type with an installed FileSystemProvider), otherwise a FileSystemNotFoundException will be thrown by the Paths.get(URI) call. In particular, certain Groovy runtimes such as groovyshell and nextflow return a data: URI, which will not typically match an installed FileSystemProvider.
This makes sense if you are running the Groovy code as a script, otherwise the whole idea gets a little confusing, IMO. The workaround is here: https://issues.apache.org/jira/browse/GROOVY-1642
Basically this involves changing startGroovy.sh to pass in the location of the Groovy script as an environment variable.
As long as this information is not provided directly by Groovy, you can modify the groovy(.sh|.bat) starter script to make it available as a system property:
For Unix boxes, just change $GROOVY_HOME/bin/groovy (the shell script) to do
export JAVA_OPTS="$JAVA_OPTS -Dscript.name=$0"
before calling startGroovy
For Windows:
In startGroovy.bat add the following 2 lines right after the line with
the :init label (just before the parameter slurping starts):
@rem get name of script to launch with full path
set GROOVY_SCRIPT_NAME=%~f1
A bit further down in the batch file, after the line that says set JAVA_OPTS=%JAVA_OPTS% -Dgroovy.starter.conf="%STARTER_CONF%", add the line
set JAVA_OPTS=%JAVA_OPTS% -Dscript.name="%GROOVY_SCRIPT_NAME%"
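On the Groovy side, the value can then be read back as an ordinary system property. A sketch; script.name is only populated if the starter script was patched as described above:

// Only set when the groovy/startGroovy starter script was patched as above
def scriptPath = System.getProperty('script.name')
if (scriptPath) {
    def scriptDir = new File(scriptPath).absoluteFile.parent
    println "Script directory: $scriptDir"
}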
For Gradle users
I had the same issue when I started working with Gradle. I wanted to compile my Thrift files with a remote Thrift compiler (a custom one built by my company).
Below is how I solved my issue:
task compileThrift {
    doLast {
        def projectLocation = projectDir.getAbsolutePath(); // HERE is what you've been looking for.
        ssh.run {
            session(remotes.compilerServer) {
                // Delete existing thrift file.
                cleanGeneratedFiles()
                new File("$projectLocation/thrift/").eachFile() { f ->
                    def fileName = f.getName()
                    if (f.absolutePath.endsWith(".thrift")) {
                        put from: f, into: "$compilerLocation/$fileName"
                    }
                }
                execute "mkdir -p $compilerLocation/gen-java"
                def compileResult = execute "bash $compilerLocation/genjar $serviceName", logging: 'stdout', pty: true
                assert compileResult.contains('SUCCESSFUL')
                get from: "$compilerLocation/$serviceName" + '.jar', into: "$projectLocation/libs/"
            }
        }
    }
}
One more solution. It works perfectly even when you run the script using GroovyConsole:
File getScriptFile() {
    new File(this.class.classLoader.getResourceLoader().loadGroovySource(this.class.name).toURI())
}
println getScriptFile()
Workaround: in our case the script was running in an Ant environment, so we stored a known ancestor directory (knowing the subpath) in the Java system properties (System.setProperty("dirAncestor", "/foo")) and could then access that ancestor directory from Groovy via properties.get('dirAncestor').
Maybe this will help for some of the scenarios mentioned here.