What is the best way to reset a Susy grid container? I have a grid with several containers stacked under each other, like this:
+-----------------+
|   [container]   |
|   [container]   |
|   [container]   |
|                 |
+-----------------+
Now on the front page, I want one container to be full width:
+-----------------+
|   [container]   |
|[   container   ]|
|   [container]   |
|                 |
+-----------------+
Currently I am resetting the container like this:
body.front {
  #main {
    width: 100%;
    max-width: none;
    margin: 0;
    padding: 0;
  }
}
However, seeing that Susy already provides reset mixins such as reset-columns, I'm wondering whether this is the recommended way to do it.
Yes, this is the right approach. There is currently no reset-container mixin.
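If you repeat this reset in several places, one option is to wrap it in a project-level mixin of your own (a sketch; reset-container is a made-up name, not a Susy API):

@mixin reset-container {
  width: 100%;
  max-width: none;
  margin: 0;
  padding: 0;
}

body.front #main {
  @include reset-container;
}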
Related
Basically, I want to set up a website with LXC, so I installed LXD, created a container called app1, and installed apache2 inside it. Everything is running, but when I enter the container's IP in the browser I get "This site can't be reached". I disabled ufw, and even removed it altogether, but nothing changed.
Here are the commands I ran to test, with their results:
$ sudo lxc list
+------+---------+---------------------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  |        IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+---------------------+-----------------------------------------------+-----------+-----------+
| app1 | RUNNING | 10.221.72.14 (eth0) | fd42:969c:2638:6357:216:3eff:fe59:efd7 (eth0) | CONTAINER | 0         |
+------+---------+---------------------+-----------------------------------------------+-----------+-----------+
$ sudo lxc network ls
+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| br0    | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| ens3   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxdbr0 | bridge   | YES     |             | 2       |
+--------+----------+---------+-------------+---------+
| virbr0 | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
$ sudo lxc network show lxdbr0
config:
  ipv4.address: 10.221.72.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:969c:2638:6357::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/app1
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
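No answer is recorded here, but one point worth noting as an assumption: lxdbr0 is a NATed bridge (ipv4.nat: "true"), so the container address 10.221.72.14 is normally reachable only from the LXD host itself, not from other machines on the network. If the browser runs on another machine, a common fix is an LXD proxy device that forwards a host port into the container; the device name and addresses below are illustrative:

# Forward port 80 on the host to port 80 inside app1
sudo lxc config device add app1 web-proxy proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80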
I have a Node.js API app on Azure. I use bunyan to log every request to stdout. How can I save and read the log files? I enabled blob logging, but the only thing that shows up in my storage is a bunch of CSV files. Here is an example:
| date                | level   | applicationName | instanceId | eventId            | pid   | tid | message                                                     |
|---------------------+---------+-----------------+------------+--------------------+-------+-----+-------------------------------------------------------------|
| 2017-05-17T14:21:15 | Verbose | myApp           | tae9d6     | 636306276755847146 | 13192 | -1  | SnapshotHelper::RestoreSnapshotInternal SUCCESS - File.Copy |
| 2017-05-17T14:21:15 | Verbose | myApp           | tae9d6     | 636306276756784690 | 13192 | -1  | SnapshotHelper::RestoreSnapshotInternal SUCCESS - process   |
Where are the logs that I printed to stdout?
1) Create a file named iisnode.yml in your root folder (D:\home\site\wwwroot) if it does not already exist.
2) Add the following lines to it:
loggingEnabled: true
logDirectory: iisnode
Once that is done, you can find the logs in D:\home\site\wwwroot\iisnode.
For more info, please refer to https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-nodejs-debug#enable-logging.
With the above settings in iisnode.yml in place, the logs you see in D:\home\site\wwwroot\iisnode may come from BLOB storage or from the file system.
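As background, bunyan writes to stdout by default, and iisnode captures the node process's stdout/stderr into files in the configured logDirectory. A minimal sketch (logger name illustrative):

var bunyan = require('bunyan');

// With no streams configured, bunyan logs JSON lines to stdout,
// which iisnode redirects into the iisnode directory set above.
var log = bunyan.createLogger({ name: 'myApp' });
log.info('request received');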
Here is part of my code:
for (i = 0; i < 29; i++)
{
    *reg0  = (unsigned int)(src[i][0]  << 24 | (src[i][1]  << 16) | (src[i][2]  << 8) | src[i][3]);
    printk("");
    *reg4  = (unsigned int)(src[i][4]  << 24 | (src[i][5]  << 16) | (src[i][6]  << 8) | src[i][7]);
    printk("");
    *reg8  = (unsigned int)(src[i][8]  << 24 | (src[i][9]  << 16) | (src[i][10] << 8) | src[i][11]);
    printk("");
    *reg12 = (unsigned int)(src[i][12] << 24 | (src[i][13] << 16) | (src[i][14] << 8) | 0);
    printk("");
}
This is a for loop used in a module built under PetaLinux. It works very well after insmod and running the user application, but when I comment out the printk("") calls the module doesn't work any more; it seems to get lost, as if stuck in recursion. I have to suppress printk because it takes too much execution time.
Any help would be appreciated.
Regards.
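No answer is recorded here, but one likely explanation, offered as an assumption: printk() acts as a compiler barrier, so once it is removed the compiler is free to reorder or merge the plain pointer stores to reg0..reg12, which breaks memory-mapped I/O. The usual fix is the kernel's MMIO accessors (or at least volatile-qualified pointers) rather than a printk() side effect. A sketch, assuming the register block was mapped with ioremap() and that each src row is 16 bytes:

#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical helper: regs is assumed to come from ioremap(). */
static void write_rows(void __iomem *regs, const u8 src[][16])
{
    int i;

    for (i = 0; i < 29; i++) {
        /* writel() provides the ordering guarantees that plain
         * dereferences lack, so no printk("") barrier is needed. */
        writel(src[i][0]  << 24 | src[i][1]  << 16 | src[i][2]  << 8 | src[i][3],  regs + 0x0);
        writel(src[i][4]  << 24 | src[i][5]  << 16 | src[i][6]  << 8 | src[i][7],  regs + 0x4);
        writel(src[i][8]  << 24 | src[i][9]  << 16 | src[i][10] << 8 | src[i][11], regs + 0x8);
        writel(src[i][12] << 24 | src[i][13] << 16 | src[i][14] << 8,              regs + 0xc);
    }
}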
The Problem
Connecting directly through redis-cli to my twemproxy will correctly proxy me over to redis without any issues/disconnects. However, when I use node-redis to connect to twemproxy I get the following error:
[Error: Redis connection gone from end event.]
Trace is as follows:
Error: Ready check failed: Redis connection gone from end event.
    at RedisClient.on_info_cmd (/home/vagrant/tests/write-tests/node_modules/redis/index.js:368:35)
    at Command.callback (/home/vagrant/tests/write-tests/node_modules/redis/index.js:418:14)
    at RedisClient.flush_and_error (/home/vagrant/tests/write-tests/node_modules/redis/index.js:160:29)
    at RedisClient.connection_gone (/home/vagrant/tests/write-tests/node_modules/redis/index.js:474:10)
    at Socket.<anonymous> (/home/vagrant/tests/write-tests/node_modules/redis/index.js:103:14)
    at Socket.EventEmitter.emit (events.js:117:20)
    at _stream_readable.js:919:16
    at process._tickCallback (node.js:419:13)
This error occurs whether or not the redis-server is even running, so I am pretty sure it has to do with how node-redis and twemproxy are interacting. Or not interacting, as the case may be.
My Question
Just what the heck is happening?
Background Information
I've got a simple test setup that is as follows:
+------------------+
|    +----+----+   |
|    | r1 | r2 |   |
|    +----+----+   |
|         |        |
|    +---------+   |
|    |twemproxy|   |
|    +---------+   |
|     /   |   \    |
| +----+----+----+ |
| | aw | aw | aw | |
| +----+----+----+ |
+------------------+
aw = api worker
r1/r2 = redis instance
twemproxy = twemproxy
the aws are Node.js processes, clustered on the same host
r1/r2 are redis instances, again together on one host
node version 0.10.x
All three machines run from a very sparse Vagrantfile, with a static IP assigned to each on a private network. Each machine is reachable from every other machine on the specified ports.
After a bit of poking around, I realized it is because node_redis attempts to send the INFO command on connection by default.
Simply adding no_ready_check: true to the connection options solves the issue and lets the connection go through twemproxy.
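A minimal sketch of the fix with the classic node_redis API (host and port illustrative; twemproxy's examples commonly listen on 22121). The ready check sends INFO, which twemproxy does not support:

var redis = require('redis');

// Skip the INFO-based ready check that twemproxy cannot answer
var client = redis.createClient(22121, '127.0.0.1', { no_ready_check: true });

client.set('foo', 'bar', function (err, reply) {
    if (err) throw err;
    console.log(reply); // 'OK'
});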
My assumption is that any module tested using Intern will automatically be covered by Istanbul's code coverage. For reasons unknown to me, my module is not being included.
I am:
running Intern 1.6.2 (installed with npm locally)
testing NodeJS code
using callbacks, not promises
using CommonJS modules, not AMD modules
Directory Structure (only showing relevant files):
plister
|
|--libraries
|  |--file-type-support.js
|
|--tests
|  |--intern.js
|  |--unit
|     |--file-type-support.js
|
|--node_modules
   |--intern
plister/tests/intern.js
define({
    useLoader: {
        'host-node': 'dojo/dojo'
    },
    loader: {
        packages: [
            {name: 'libraries', location: 'libraries'}
        ]
    },
    reporters: ['console'],
    suites: ['tests/unit/file-type-support'],
    functionalSuites: [],
    excludeInstrumentation: /^(tests|node_modules)\//
});
plister/tests/unit/file-type-support.js
define([
    'intern!bdd',
    'intern/chai!expect',
    'intern/dojo/node!fs',
    'intern/dojo/node!path',
    'intern/dojo/node!stream-equal',
    'intern/dojo/node!../../libraries/file-type-support'
], function (bdd, expect, fs, path, streamEqual, fileTypeSupport) {
    'use strict';

    bdd.describe('file-type-support', function doTest() {
        bdd.it('should show that the example output.plist matches the ' +
            'temp.plist generated by the module', function () {
            var deferred = this.async(),
                input = path.normalize('tests/resources/input.plist'),
                output = path.normalize('tests/resources/output.plist'),
                temporary = path.normalize('tests/resources/temp.plist');

            // Test the deactivate function by checking the output it
            // produces against the expected test output.
            fileTypeSupport.deactivate(fs.createReadStream(input),
                fs.createWriteStream(temporary),
                deferred.rejectOnError(function onFinish() {
                    streamEqual(fs.createReadStream(output),
                        fs.createReadStream(temporary),
                        deferred.callback(function checkEqual(error, equal) {
                            expect(equal).to.be.true;
                        }));
                }));
        });
    });
});
Output:
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (29ms)
1/1 tests passed
1/1 tests passed
Output (on failure):
FAIL: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (30ms)
AssertionError: expected true to be false
AssertionError: expected true to be false
0/1 tests passed
0/1 tests passed
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0
Output (after removing excludeInstrumentation):
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (25ms)
1/1 tests passed
1/1 tests passed
------------------------------------------+-----------+-----------+-----------+-----------+
File                                      |   % Stmts | % Branches|   % Funcs |   % Lines |
------------------------------------------+-----------+-----------+-----------+-----------+
   node_modules/intern/                   |        70 |        50 |       100 |        70 |
      chai.js                             |        70 |        50 |       100 |        70 |
   node_modules/intern/lib/               |     79.71 |     42.86 |     72.22 |     79.71 |
      Test.js                             |     79.71 |     42.86 |     72.22 |     79.71 |
   node_modules/intern/lib/interfaces/    |        80 |        50 |     63.64 |        80 |
      bdd.js                              |       100 |       100 |       100 |       100 |
      tdd.js                              |     76.19 |        50 |     55.56 |     76.19 |
   node_modules/intern/lib/reporters/     |     56.52 |        35 |     57.14 |     56.52 |
      console.js                          |     56.52 |        35 |     57.14 |     56.52 |
   node_modules/intern/node_modules/chai/ |      37.9 |      8.73 |     26.38 |     39.34 |
      chai.js                             |      37.9 |      8.73 |     26.38 |     39.34 |
   tests/unit/                            |       100 |       100 |       100 |       100 |
      file-type-support.js                |       100 |       100 |       100 |       100 |
------------------------------------------+-----------+-----------+-----------+-----------+
All files                                 |     42.14 |     11.35 |     33.45 |     43.63 |
------------------------------------------+-----------+-----------+-----------+-----------+
My module passes the test, and I can make it fail too; it just will not show up in the code coverage report. I have done the tutorial hosted on GitHub without any problems.
I tried dissecting the Istanbul and Intern dependencies and placed a console.log where the files to be covered seem to pass through, but my module never comes through it. I have also tried every variation of deferred.callback and deferred.rejectOnError, with no difference to the code coverage.
Also, any feedback on my use of deferred.callback and deferred.rejectOnError would be greatly appreciated. I am still a little uncertain about their usage.
Thanks!
As of Intern 1.6, only require('vm').runInThisContext is hooked to add code coverage data, not require, so CommonJS modules pulled in through plain require (as file-type-support is here, via the dojo/node plugin) are never instrumented. Instrumentation of require was added in Intern 2.0.
The use of callback/rejectOnError in the above code is correct.