I've been trying to enable livereload for my coverage report.
The tool I use to generate the report is karma-coverage, and it writes the report to the following folder: test/coverage/report-html
-test
    |-- coverage
        |-- report-html
            |-- index.html
Once the report is generated, I use grunt-contrib-connect to serve it with the following configuration:
var COVERAGE_BASE = './test/coverage/report-html',
...
...
...
connect: {
server: {
options: {
port: 9000,
base: '.',
open: false
}
},
dist:{
options: {
keepalive: true,
port: 9000,
base: './dist',
open: false
}
},
coverage: {
options: {
keepalive: true,
port: 9001,
base: COVERAGE_BASE ,
open: false
}
}
}
When I execute my grunt coverage task, my configuration is:
COVERAGE_TASKS = ['shell:test','open:coverage', 'connect:coverage','watch'];
grunt.registerTask('coverage', COVERAGE_TASKS);
My idea is to "intercept" index.html and inject a livereload script:
<script src="//localhost:35729/livereload.js"></script>
Is there some way to intercept index.html before it is served by connect, and process it to add this script?
Maybe this helps:
You can add middleware to intercept requests to your connect server, like this:
grunt.initConfig({
connect: {
server: {
options: {
middleware: [
function myMiddleware(req, res, next) {
// DO YOUR THING HERE!!!!
res.end('Hello, world!');
}
],
},
},
},
});
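For the specific goal of injecting the livereload snippet, here is a sketch of what the coverage target could look like (an assumption, not tested against your setup): grunt-contrib-connect's own livereload option injects the snippet for you via connect-livereload, or you can unshift connect-livereload onto the middleware stack yourself. Note also that keepalive: true makes connect block the task queue, so the watch task listed after connect:coverage would never run; letting watch keep the process alive is probably what you want for livereload.
coverage: {
  options: {
    port: 9001,
    base: COVERAGE_BASE,
    open: false,
    // Use either Option 1 or Option 2, not both.
    // Option 1: let grunt-contrib-connect inject the livereload <script>
    // into served HTML (it uses connect-livereload under the hood).
    livereload: 35729,
    // Option 2: inject it explicitly (npm install connect-livereload --save-dev).
    // Recent grunt-contrib-connect versions pass the default middleware
    // stack as a third argument to the middleware callback.
    middleware: function (connect, options, middlewares) {
      middlewares.unshift(require('connect-livereload')({ port: 35729 }));
      return middlewares;
    }
  }
}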
Related
I am setting up an Astro site which will display data fetched from a simple service running on the same host but a different port.
The service is a simple Express app.
server.js:
const express = require('express')
const app = express()
const port = 3010
const response = {
message: "hello"
}
app.get('/api/all', (_req, res) => {
res.send(JSON.stringify(response))
})
app.listen(port, () => {
console.log(`listening on port ${port}`)
})
Since the service is running on port 3010, which is different from the Astro site, I configure a server proxy at the Vite level.
astro.config.mjs:
import { defineConfig } from 'astro/config';
import react from '@astrojs/react';
export default defineConfig({
integrations: [react()],
vite: {
optimizeDeps: {
esbuildOptions: {
define: {
global: 'globalThis'
}
}
},
server: {
proxy: {
'/api/all': 'http://localhost:3010'
}
}
},
});
Here is where I am trying to invoke the service.
index.astro:
---
const response = await fetch('/api/all');
const data = await response.json();
console.log(data);
---
When I run yarn dev I get this console output:
Response {
size: 0,
[Symbol(Body internals)]: {
body: Readable {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 1,
_maxListeners: undefined,
_read: [Function (anonymous)],
[Symbol(kCapture)]: false
},
stream: Readable {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 1,
_maxListeners: undefined,
_read: [Function (anonymous)],
[Symbol(kCapture)]: false
},
boundary: null,
disturbed: false,
error: null
},
[Symbol(Response internals)]: {
type: 'default',
url: undefined,
status: 404,
statusText: '',
headers: { date: 'Tue, 02 Aug 2022 19:41:02 GMT' },
counter: undefined,
highWaterMark: undefined
}
}
It looks like the network request is returning a 404.
I'm not seeing much more about server configuration in the docs.
Am I going about this the right way?
I have this working correctly with a vanilla Vite app and the same config/setup.
How can I proxy local service calls for an Astro application?
Short Answer
You cannot proxy service calls with Astro, but you also don't have to.
For the direct resolution, see the section "Functional test without proxy" below.
Details
Astro does not forward the server.proxy config to Vite (unless you patch your own version of Astro); the proxy entry in Astro's Vite server config can be seen left empty:
proxy: {
// add proxies here
},
Reference: https://github.com/withastro/astro/blob/8c100a6fe6cc652c3799d1622e12c2c969f30510/packages/astro/src/core/create-vite.ts#L125
There is a merge of the Astro server config with the Astro vite.server config, but it does not carry over the proxy parameter. This is not obvious from the code alone; see the tests below.
let result = commonConfig;
result = vite.mergeConfig(result, settings.config.vite || {});
result = vite.mergeConfig(result, commandConfig);
Reference: https://github.com/withastro/astro/blob/8c100a6fe6cc652c3799d1622e12c2c969f30510/packages/astro/src/core/create-vite.ts#L167
Tests
Config tests
I tried all possible ways of passing the config to Astro, using a different port number in each location to show which one takes precedence:
A vite.config.js file at the project root with:
export default {
server: {
port:6000,
proxy: {
'/api': 'http://localhost:4000'
}
}
}
Two locations in the root astro.config.mjs file:
server
vite.server
export default defineConfig({
server:{
port: 3000,
proxy: {
'/api': 'http://localhost:4000'
}
},
integrations: [int_test()],
vite: {
optimizeDeps: {
esbuildOptions: {
define: {
global: 'globalThis'
}
}
},
server: {
port:5000,
proxy: {
'/api': 'http://localhost:4000'
}
}
}
});
An Astro integration:
Astro has so-called integrations (a kind of Astro plugin) that can update the config. An integration makes it possible to see what was finally kept in the config, and it also gives a last chance to update it.
integration-test.js
async function config_setup({ updateConfig, config, addPageExtension, command }) {
green_log(`astro:config:setup> running (${command})`)
updateConfig({
server:{proxy : {'/api': 'http://localhost:4000'}},
vite:{server:{proxy : {'/api': 'http://localhost:4000'}}}
})
console.log(config.server)
console.log(config.vite)
green_log(`astro:config:setup> end`)
}
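For completeness, a sketch of how config_setup above could be wired into the integration object that astro.config.mjs imports as int_test (the exact wrapper is an assumption here; the full version is in the repo linked at the end):
export default function int_test() {
  return {
    name: 'int-test',
    hooks: {
      // run config_setup when Astro assembles its configuration
      'astro:config:setup': config_setup
    }
  }
}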
This is the output log:
astro:config:setup> running (dev)
{ host: false, port: 3000, streaming: true }
{
optimizeDeps: { esbuildOptions: { define: [Object] } },
server: { port: 5000, proxy: { '/api': 'http://localhost:4000' } }
}
astro:config:setup> end
The proxy parameter is removed from the Astro server config; the vite config is visible here but has no effect, as it is overridden and not forwarded to Vite.
Test results
The dev server runs on port 3000, which comes from the Astro server config; all the other configs are overridden.
The fetch API fails with this error:
error Failed to parse URL from /api
File:
D:\dev\astro\astro-examples\24_api-proxy\D:\dev\astro\astro-examples\24_api-proxy\src\pages\index.astro:15:20
Stacktrace:
TypeError: Failed to parse URL from /api
at Object.fetch (node:internal/deps/undici/undici:11118:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Functional test without proxy
Given that Astro frontmatter runs on the server side (during build in SSG mode, and on page load in SSR mode, after which the server sends the resulting HTML), Astro has access to all host ports and can use the service port directly, like this:
const response = await fetch('http://localhost:4000/api');
const data = await response.json();
console.log(data);
The code above runs as expected, without errors.
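As a small variation (a sketch, not part of the tests above), the service origin can come from an environment variable instead of being hard-coded; SERVICE_URL is a hypothetical variable name, and this assumes Astro exposes non-PUBLIC_ .env variables to server-side code via import.meta.env (process.env also works in the Node-based dev/SSG context):
// frontmatter sketch: read the service origin from the environment,
// falling back to the local port used in the tests above
const base = import.meta.env.SERVICE_URL ?? 'http://localhost:4000';
const response = await fetch(`${base}/api`);
const data = await response.json();
console.log(data);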
Reference Example
All tests and files mentioned above are available in the reference example GitHub repo: https://github.com/MicroWebStacks/astro-examples/tree/main/24_api-proxy
You can add your own proxy middleware with the astro:server:setup hook.
For example, use http-proxy-middleware in the server setup hook:
// plugins/proxy-middleware.mjs
import { createProxyMiddleware } from "http-proxy-middleware"
export default (context, options) => {
const apiProxy = createProxyMiddleware(context, options)
return {
name: 'proxy',
hooks: {
'astro:server:setup': ({ server }) => {
server.middlewares.use(apiProxy)
}
}
}
}
Usage:
// astro.config.mjs
import { defineConfig } from 'astro/config';
import proxyMiddleware from './plugins/proxy-middleware.mjs';
// https://astro.build/config
export default defineConfig({
integrations: [
proxyMiddleware("/api/all", {
target: "http://localhost:3010",
changeOrigin: true,
}),
],
});
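With this proxy in place, requests that the browser sends to the dev server (for example client-side fetches from a script tag or a framework island) are forwarded to the Express service. A minimal client-side usage sketch inside an .astro page:
<script>
  // Client-side usage sketch: the browser sends this request to the Astro
  // dev server, where the proxy middleware forwards /api/all on to
  // http://localhost:3010.
  const res = await fetch('/api/all');
  console.log(await res.json());
</script>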
I have a parent Express app, and a Ghost app as a child app, using Ghost as an npm module.
I routed Ghost to be served at http://localhost:9000/blog. All the configuration works fine (Ghost will throw an error if the basic configuration isn't provided correctly).
Here is my Ghost startup code:
ghost({
config: path.join(__dirname, '/config/ghost.config.js')
}).then(function (ghostServer) {
app.use(ghostServer.config.paths.subdir, ghostServer.rootApp);
ghostServer.start(app);
});
Here is my Ghost config:
// # Ghost Configuration
// Setup your Ghost install for various [environments](http://support.ghost.org/config/#about-environments).
// Ghost runs in `development` mode by default. Full documentation can be found at http://support.ghost.org/config/
var path = require('path'),
config;
config = {
// ### Production
// When running Ghost in the wild, use the production environment.
// Configure your URL and mail settings here
production: {
url: 'http://my-ghost-blog.com',
mail: {},
database: {
client: 'sqlite3',
connection: {
filename: path.join(__dirname, '/content/data/ghost.db')
},
debug: false
},
server: {
host: '127.0.0.1',
port: '2368'
}
},
// ### Development **(default)**
development: {
// The url to use when providing links to the site, E.g. in RSS and email.
// Change this to your Ghost blog's published URL.
url: 'http://localhost:9000/blog/',
// Example mail config
// Visit http://support.ghost.org/mail for instructions
// ```
// mail: {
// transport: 'SMTP',
// options: {
// service: 'Mailgun',
// auth: {
// user: '', // mailgun username
// pass: '' // mailgun password
// }
// }
// },
// ```
// #### Database
// Ghost supports sqlite3 (default), MySQL & PostgreSQL
database: {
client: 'sqlite3',
connection: {
filename: path.join(__dirname, '../blog/data/ghost-dev.db')
},
debug: false
},
// #### Server
// Can be host & port (default), or socket
server: {
// Host to be passed to node's `net.Server#listen()`
host: '127.0.0.1',
// Port to be passed to node's `net.Server#listen()`, for iisnode set this to `process.env.PORT`
port: '9000'
},
// #### Paths
// Specify where your content directory lives
paths: {
contentPath: path.join(__dirname, '../blog/')
}
},
// **Developers only need to edit below here**
// ### Testing
// Used when developing Ghost to run tests and check the health of Ghost
// Uses a different port number
testing: {
url: 'http://127.0.0.1:2369',
database: {
client: 'sqlite3',
connection: {
filename: path.join(__dirname, '/content/data/ghost-test.db')
}
},
server: {
host: '127.0.0.1',
port: '2369'
},
logging: false
},
// ### Testing MySQL
// Used by Travis - Automated testing run through GitHub
'testing-mysql': {
url: 'http://127.0.0.1:2369',
database: {
client: 'mysql',
connection: {
host : '127.0.0.1',
user : 'root',
password : '',
database : 'ghost_testing',
charset : 'utf8'
}
},
server: {
host: '127.0.0.1',
port: '2369'
},
logging: false
},
// ### Testing pg
// Used by Travis - Automated testing run through GitHub
'testing-pg': {
url: 'http://127.0.0.1:2369',
database: {
client: 'pg',
connection: {
host : '127.0.0.1',
user : 'postgres',
password : '',
database : 'ghost_testing',
charset : 'utf8'
}
},
server: {
host: '127.0.0.1',
port: '2369'
},
logging: false
}
};
module.exports = config;
So basically, when I go to http://localhost:9000/blog, nothing is rendered at all. I was using Chrome, also tested it in Safari, and tried both with JavaScript turned off as well.
Then I tried curl http://localhost:9000/blog and a requester app (like Postman), and both returned the correct HTML. I also tried curl with the user agent set to Chrome and to Safari; that also returned the correct HTML.
I traced it down into the ghost node_modules; the renderer is in ghost > core > server > controllers > frontend > index.js, at the line res.render(view, result).
I changed the res.render call to this:
res.render(view, result, function(err, string) {
console.log("ERR", err);
console.log("String", string);
res.send(string);
})
There is no error and it logs the rendered string, but nothing shows up in the browser.
I tried curl and Postman; those work, but the browser doesn't.
Then I tried sending a hello-world string, and the browser rendered it.
I then increased the string length bit by bit, and it turns out that any string with length < 1023 is rendered by the browser, but once it gets past that, nothing renders.
My parent Express app on its own can send strings longer than 1023 characters, and the ghost module used standalone can too.
So something must be happening between the two, but I don't know how to debug this.
Please help.
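One way to narrow this down (a diagnostic sketch only, not a known fix): log the headers the parent app sends for /blog responses and compare what curl receives with what Chrome and Safari receive. When only responses above a size threshold break, Content-Length, Transfer-Encoding and Content-Encoding are the usual suspects, e.g. a compression or etag-related middleware ending up applied twice between the parent app and Ghost's own stack.
// Diagnostic sketch: register this in the parent app before mounting Ghost.
// It logs the status and outgoing headers of each /blog response so the
// curl output can be compared with what the browsers get.
app.use('/blog', function (req, res, next) {
  res.on('finish', function () {
    console.log(req.method, req.originalUrl, res.statusCode,
      // res.getHeaders() exists on newer Node; older versions expose _headers
      typeof res.getHeaders === 'function' ? res.getHeaders() : res._headers);
  });
  next();
});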
I have an older project that was set up about a year ago using a Yeoman generator. Livereload has been working fine, but now that I want to upgrade my node dependencies, my current configuration no longer works. I have tried to find examples of how it should look, but I cannot find any good ones.
Here is my current configuration. What do I need to change to get it working with the latest version of grunt-contrib-connect? The error message I get is:
Running "connect:livereload" (connect) task
Warning: connect.static is not a function Use --force to continue.
Also, do you have any tips on good tutorials to get a better understanding of how this all fits together?
connect: {
options: {
port: 9009,
hostname: 'localhost',
livereload: 35729
},
proxies: [
{
context: '/api',
host: 'localhost',
port: 61215,
https: false,
xforward: false,
rewrite: {
'^/api': '/app/api'
}
}
],
livereload: {
options: {
open: false,
base: [
'.tmp',
'<%= yeoman.app %>',
],
middleware: function (connect, options) {
if (!Array.isArray(options.base)) {
options.base = [options.base];
}
var middlewares = [
connect.static('.tmp'),
connect().use(
'/modules',
connect.static('./modules')
),
connect().use(
'/node_modules',
connect.static('./node_modules')
),
connect.static(appConfig.app),
require('grunt-connect-proxy/lib/utils').proxyRequest
];
// Make directory browse-able.
var directory = options.directory || options.base[options.base.length - 1];
middlewares.push(connect.directory(directory));
return middlewares;
}
}
},
The new grunt-contrib-connect no longer bundles connect.static. You'll need to install serve-static and use serveStatic instead of connect.static.
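A sketch of what the livereload middleware function could look like after that change, assuming serve-static and serve-index are installed (npm install serve-static serve-index --save-dev; serve-index replaces the also-removed connect.directory):
var serveStatic = require('serve-static');
var serveIndex = require('serve-index');
// ...
middleware: function (connect, options) {
  if (!Array.isArray(options.base)) {
    options.base = [options.base];
  }
  var middlewares = [
    serveStatic('.tmp'),
    connect().use('/modules', serveStatic('./modules')),
    connect().use('/node_modules', serveStatic('./node_modules')),
    serveStatic(appConfig.app),
    require('grunt-connect-proxy/lib/utils').proxyRequest
  ];
  // Make the last base directory browse-able, as before.
  var directory = options.directory || options.base[options.base.length - 1];
  middlewares.push(serveIndex(directory));
  return middlewares;
}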
I followed this tutorial:
http://www.gilluminate.com/2014/06/10/livereload-ssl-https-grunt-watch/
My Gruntfile.js looks like this:
module.exports = function(grunt) {
grunt.initConfig({
jshint: {
files: ['Gruntfile.js', 'src/**/*.js', 'test/**/*.js'],
options: {
globals: {
jQuery: true
}
}
},
watch: {
css: {
files: '**/*.sass',
tasks: ['sass'],
options: {
livereload: {
port: 9000,
key: grunt.file.read('ssl/livereload.key'),
cert: grunt.file.read('ssl/livereload.crt')
// you can pass in any other options you'd like to the https server, as listed here: http://nodejs.org/api/tls.html#tls_tls_createserver_options_secureconnectionlistener
}
}
}
}
});
grunt.loadNpmTasks('grunt-contrib-jshint');
grunt.loadNpmTasks('grunt-contrib-watch');
grunt.registerTask('default', ['jshint']);
};
I added
<script src="//localhost:9000/livereload.js"></script>
at the end of my index.html file.
My goal is to run livereload over HTTPS on port 9000.
Maybe I am missing some part, like running Grunt so that it picks up the Gruntfile?
The error I get is:
GET https://localhost:9000/livereload.js net::ERR_CONNECTION_REFUSED
grunt-contrib-watch uses tiny-lr@^0.2.1, which doesn't support SSL; someone needs to make a pull request to upgrade tiny-lr.
So, I have the grunt file below. I want to add a task that will start my node app, watch for changes in a directory, and restart it. I have been using supervisor and node-dev (which are great), but I want to run one command that starts my whole app. There has got to be a simple way to do this, but I'm just missing it. It's also written in CoffeeScript (not sure if that changes things)...
module.exports = function(grunt) {
grunt.initConfig({
/*exec: {
startApi: {
command: "npm run-script start-api"
}
},*/
//static server
server: {
port: 3333,
base: './public',
keepalive: true
},
// Coffee to JS compilation
coffee: {
compile: {
files: {
'./public/js/*.js': './src/client/app/**/*.coffee'
},
options: {
//basePath: 'app/scripts'
}
}
},
mochaTest: {
all: ['test/**/*.*']
},
watch: {
coffee: {
files: './src/client/app/**/*.coffee',
tasks: 'coffee'
},
mochaTest: {
files: 'test/**/*.*',
tasks: 'mochaTest'
}
}
});
grunt.loadNpmTasks('grunt-contrib-coffee');
grunt.loadNpmTasks('grunt-mocha-test');
//grunt.loadNpmTasks('grunt-exec');
grunt.registerTask( 'default', 'server coffee mochaTest watch' );
};
As you can see in the comments, I tried grunt-exec, but the node command stops the execution of the other tasks.
You can have grunt run the default task and the watch task when you start your node app.
In app.js:
var cp = require('child_process');
var grunt = cp.spawn('grunt', ['--force', 'default', 'watch'])
grunt.stdout.on('data', function(data) {
// relay output to console
console.log("%s", data)
});
Then just run node app as normal!
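Going the other direction, an alternative sketch (not part of the answer above): grunt-nodemon plus grunt-concurrent let a single grunt command start the node app and keep the watch task running. This assumes both plugins are installed (npm install grunt-nodemon grunt-concurrent --save-dev) and that app.js is the server entry point:
module.exports = function(grunt) {
  grunt.initConfig({
    nodemon: {
      dev: {
        script: 'app.js',                   // hypothetical entry point
        options: { watch: ['src/server'] }  // hypothetical server source folder
      }
    },
    concurrent: {
      dev: {
        tasks: ['nodemon:dev', 'watch'],
        options: { logConcurrentOutput: true }
      }
    }
  });

  grunt.loadNpmTasks('grunt-nodemon');
  grunt.loadNpmTasks('grunt-concurrent');
  grunt.registerTask('dev', ['concurrent:dev']);
};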