Google structured data

Before I ask, I would like to mention that I have searched for a solution...
I am trying to build a Google AMP page. I used their template and tested it with Chrome's Developer Tools; however, in Google's own Structured Data Testing Tool I get an error and two warnings. I'm stuck trying to figure this out. Here's my code, and below is a screen capture of the errors I see.
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "NewsArticle",
"headline": "Remote Card Sorting and Prioritization Matrix Tools for usability testing and information architecture.",
"datePublished": "2016-10-08T12:02:41Z",
"dateModified": "2016-11-05T12:02:41Z",
"author": "usabiliTEST",
"image": [
"/i/usabilitestLogo.png"
],
"publisher": "usabiliTEST"
}
</script>
How can this be fixed? What am I still missing?

Image needs to be an 'Image Object'.
"image": {
"@type": "ImageObject",
"url": "https://google.com/thumbnail1.jpg",
"height": 800,
"width": 800
},
I answered a similar question here.
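Putting the fixes together, the corrected markup might look like the sketch below. Note that Google's article guidelines also expect publisher to be an Organization with a logo, and image URLs to be absolute; the absolute URLs and image dimensions here are placeholders, not values from the question:

```json
{
  "@context": "http://schema.org",
  "@type": "NewsArticle",
  "headline": "Remote Card Sorting and Prioritization Matrix Tools for usability testing and information architecture.",
  "datePublished": "2016-10-08T12:02:41Z",
  "dateModified": "2016-11-05T12:02:41Z",
  "author": {
    "@type": "Organization",
    "name": "usabiliTEST"
  },
  "image": {
    "@type": "ImageObject",
    "url": "https://example.com/i/usabilitestLogo.png",
    "height": 800,
    "width": 800
  },
  "publisher": {
    "@type": "Organization",
    "name": "usabiliTEST",
    "logo": {
      "@type": "ImageObject",
      "url": "https://example.com/i/usabilitestLogo.png"
    }
  }
}
```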

Related

Is it possible to derive Web Vitals from chrome trace events data?

I am hoping to get some advice on calculating Core Web Vitals without interacting with the PerformanceObserver API, using Chrome trace events instead.
Since the Puppeteer operation is done at scale, I prefer not to interact with the page using page.evaluate but instead, if possible, to calculate the metrics from the data I get using:
await page.tracing.start();
await page.goto(url, { waitUntil: "networkidle0" });
writeFile("dump.json", (await page.tracing.stop()).toString()); // <- Can I get web vitals/perf metrics here?
I can see in the output of the trace, events like this:
{
"args": {
"data": {
"candidateIndex": 1,
"isMainFrame": true,
"navigationId": "927C29B0D3BD3783D85D54D161903DEF",
"nodeId": 92,
"size": 145574,
"type": "image"
},
"frame": "8DCDB2C2311B3C524C094C8C4555E0FB"
},
"cat": "loading,rail,devtools.timeline",
"name": "largestContentfulPaint::Candidate",
"ph": "I",
"pid": 94113,
"s": "t",
"tid": 775,
"ts": 437767356341
}
It would be good to know if I am barking up the wrong tree here, or whether I can extract/calculate perf metrics using only the traceEvents data. I am assuming this should be possible since the dump has all the events that took place during page load, and DevTools seems to place the metrics on the timeline too.
Many Thanks.
The PerformanceTimeline domain used by the Chrome DevTools protocol may contain the kind of information you're looking for, similar to your screenshot.
The FCP, LCP, and CLS vitals are also recorded in the trace data and accessible via Puppeteer, but there are some caveats to consider:
The correct trace categories should be recorded. Refer to the categories used by DevTools.
The render and frame IDs should be used to disambiguate records between the top-level frame and any iframes. You can get these IDs from the TracingStartedInBrowser event.

Problems playing, pausing & resuming on Google Assistant (Actions on Google) with live streaming Audio/MP3s using Actions Builder?

This is my first post on StackOverflow (long-time lurker, first-time poster), so go easy on me. ^__^;
For those having trouble implementing play/pause/resume functionality with a STATIC mp3, I'm assuming the process is the same, so hopefully this post will help you as well.
I’m working on building a live mp3 streaming Google Action, and I seem to be having issues implementing it in the new Actions Console (https://console.actions.google.com/).
According to the Google Actions documentation found here:
https://developers.google.com/assistant/conversational/prompts-media - Last updated 2021-03-10 UTC.
I should be able to invoke a Media Response to play an mp3 back to the user using the YAML/JSON example provided in the above link; however, it seems that playing, pausing, and resuming don’t work correctly with a streaming mp3 URL.
TLDR; Here's a shorter version of the write up:
https://imgur.com/a/FIgOsl8
For a more detailed analysis see below:
STEPS TO REPRODUCE
Starting with the example provided in the documentation and popping the sample JSON code (posted here for convenience) into the On Enter section of the scene, I was able to play the media fine.
{
"candidates": [
{
"first_simple": {
"variants": [
{
"speech": "This is a media response."
}
]
},
"content": {
"media": {
"optional_media_controls": [
"PAUSED",
"STOPPED"
],
"media_objects": [
{
"name": "Media name",
"description": "Media description",
"url": "https://storage.googleapis.com/automotive-media/Jazz_In_Paris.mp3",
"image": {
"large": {
"url": "https://storage.googleapis.com/automotive-media/album_art.jpg",
"alt": "Jazz in Paris album art"
}
}
}
],
"media_type": "AUDIO"
}
}
}
]
}
Note: In the above JSON I removed the start_offset node because it’s currently not supported by iOS and is probably put in there as an example for testing purposes.
Here’s an example of the static mp3 media response playing for reference:
https://downloaddave.com/reviews/clients/momentum-br/ga-sr/Screenshot_streaming_playing_no_error_with_test_mp3.png
I noticed that pausing and resuming the static mp3 do not work unless you enable the following System Intents:
MEDIA_STATUS_PAUSED
MEDIA_STATUS_STOPPED
MEDIA_STATUS_FAILED
MEDIA_STATUS_FINISHED
Otherwise, if you click the “pause” icon on the Media Response Player or invoke the pause earcon (earcon = ear + icon), you will encounter the following errors:
Sorry, [Your Action’s Display Name] isn't responding right now. Please try again soon.
Did not find any handling for intent event 'actions.intent.MEDIA_STATUS_PAUSED' on scene 'playStreamingAudio'
{
"endConversation": {}
}
Under the Error and status handling section of the scene I added the system intents as seen in the following screenshot.
https://downloaddave.com/reviews/clients/momentum-br/ga-sr/playStreamingAudio_Scene_Configuration_000.png
Note that if I just transition the MEDIA_STATUS_PAUSED to “No Transition” it gives me an error message, Event handler for ‘playStreamingAudio’ has an empty function call and/or empty transition.
If it goes to “End Conversation” it ends the test and exits out of the Media Response Card rather than giving me the option to resume (which seems like a bad user/conversational flow and probably won't pass review).
Tapping the “pause” icon, typing, or saying “pause” doesn’t work unless the MEDIA_STATUS_PAUSED transitions to another Scene which I’ve called pauseStreamingAudio.
In the pauseStreamingAudio scene, I added a prompt letting the user know they can say “play” or “cancel” along with suggestions indicating the same.
{
"candidates": [
{
"first_simple": {
"variants": [
{
"speech": "You can say play to resume audio or cancel to quit."
}
]
},
"suggestions": [{
"title": "Play"
}, {
"title": "Cancel"
}]
}
]
}
From the pauseStreamingAudio Scene, I added a custom intent “play” to go back to the previous Scene, playStreamingAudio.
I’m not sure if I’m doing this right BUT IT WORKS!
Streaming mp3
Now that I had the foundation working, I swapped out the static mp3 for the streaming audio. Here is the same sample JSON with the “start_offset” node removed and the streaming mp3 URL in place:
{
"candidates": [
{
"first_simple": {
"variants": [
{
"speech": "This is a media response."
}
]
},
"content": {
"media": {
"optional_media_controls": [
"PAUSED",
"STOPPED"
],
"media_objects": [
{
"name": "Media name",
"description": "Media description",
"url": "https://prod-35-230-37-193.wostreaming.net/momentum-kvmifmaac-ibc2",
"image": {
"large": {
"url": "https://storage.googleapis.com/automotive-media/album_art.jpg",
"alt": "Jazz in Paris album art"
}
}
}
],
"media_type": "AUDIO"
}
}
}
]
}
The streaming file I’m testing with doesn’t end in *.mp3, and when I check the Content-Type it reads as audio/aacp.
Codec: ADTS
Type: Audio
Channels: Stereo
Sample Rate: 44100 Hz
Bits per Sample: 32
AAC Extension: SBR+PS
This works, and I'm able to stream audio from the source file. See the screenshot below.
https://downloaddave.com/reviews/clients/momentum-br/ga-sr/Smart_Display_Time_Error_Infinit_NaN_Nan_redlined.png
However, there is a display error on the Media Response Player at the time index in the bottom right: Infinity:NaN:NaN (highlighted in the red box).
Likely related: I can no longer trigger the Pause System Intent. Instead, I get the following error:
https://downloaddave.com/reviews/clients/momentum-br/ga-sr/Screenshot_streaming_pause_error.png
Notice that the drop-down is open and there is no response for me to use and troubleshoot.
I also tried looking through the Actions on Google documentation to see if there could be something wrong with the audio stream I was providing. The best thing I could find was:
“Audio for playback must be in a correctly formatted MP3 file. MP3 files must be hosted on a web server and be publicly available through an HTTPS URL. Live streaming is only supported for the MP3 format.”
I found some info on mp3 specs on the SSML page here, but I’m not sure if this applies to the Media Response https://developers.google.com/assistant/conversational/ssml#audio - Last updated 2021-05-25 UTC.
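Given the MP3-only requirement quoted above, one quick sanity check is to verify what Content-Type the stream actually serves. A sketch (isMp3ContentType is a made-up helper, and the HEAD request assumes Node 18+'s global fetch):

```javascript
// Sketch: the docs say live streams must be MP3, which is normally
// served as Content-Type audio/mpeg; audio/aacp (as this stream
// serves) would fail the check and may explain the player errors.
function isMp3ContentType(contentType) {
  return /^audio\/mpeg\b/i.test(contentType || "");
}

// Usage (Node 18+ global fetch; streamUrl would be the stream from the question):
// const res = await fetch(streamUrl, { method: "HEAD" });
// console.log(isMp3ContentType(res.headers.get("content-type")));
```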
Does anyone have any ideas on how I can get this working or even troubleshoot this?
Could some of these circumstances be an issue with the Media Player itself? How would one go about fixing this?
Anyway, I hope this helps somebody out there & thanks very much in advance. Any help is most appreciated.

tasks for stack in VS code

I am new to VS Code (1.21.1) with HIE 0.1.0.0, installed using stack. I have been able to define a task for testing:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "test",
"type": "shell",
"command": "stack build --test",
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
I checked the documentation mentioned but did not understand much.
This task works only in the project I defined it for; how can I make it usable for any project I have open in the editor? And how can I further automate the selection of tasks?
I assume that there is somewhere a collection of tasks to run stack with HIE - could somebody point me to it?
I'm not sure if this is the problem you've run into, but Visual Studio Code lets you edit settings per workspace/project in addition to overall settings:
Make sure that you're editing the USER SETTINGS tab instead of the WORKSPACE SETTINGS tab if you want the settings to apply to all projects.
Apologies if this is a trivial answer, and the problem is something completely different.
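On the "any project" part of the question: newer versions of VS Code (well after 1.21.1) also support user-level tasks via the Tasks: Open User Tasks command, so the same stack task can live in your user profile instead of one project's tasks.json. A sketch, assuming a recent VS Code (only shell/process tasks are supported at the user level):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "stack test",
      "type": "shell",
      "command": "stack build --test",
      "options": { "cwd": "${workspaceFolder}" },
      "group": { "kind": "build", "isDefault": true }
    }
  ]
}
```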

How to configure SublimeLinter 4.0.1

I'm using Sublime Text 3, and SublimeLinter just upgraded itself to version 4.0.1.
I've read the docs but can't work out how to get it working properly. I have tried copying the default settings into user settings and changing them, but it has made no difference (this is via Preferences > Package Settings > SublimeLinter > Settings).
The gutter icons are all just white; I've tried setting it to:
"gutter_theme": "Default",
I've also tried changing the icon in styles:
"styles": [
{
"mark_style": "outline",
"priority": 1,
"scope": "region.yellowish markup.changed.sublime_linter markup.warning.sublime_linter",
"icon": "warning",
"types": [
"warning"
]
},
{
"mark_style": "outline",
"priority": 1,
"scope": "region.redish markup.deleted.sublime_linter markup.error.sublime_linter",
"icon": "error",
"types": [
"error"
]
}
],
The outline around error text is now incredibly faint and I can barely see it; previously it was clear and bright.
Does anybody know how to configure the new version to highlight errors clearly and show different icons for warnings and errors? Thanks for any help!
(How I wish this hadn't updated itself. The docs say you can manually install an older version, but that looks like another project in itself; I just want to get on with some work!)
Edit: after more fiddling around, setting "mark_style": "fill" in "styles" has at least made the errors visible on the page. But I can't figure out how to get the gutter icons to differ.
It's not just you; there's an open issue on GitHub. See here.

Separating classes in a chrome extension

I'm writing my first Chrome extension and I can't seem to find any way to pull in functionality in a RequireJS-style way.
I just want to be able to write a class in one JS file, pull its functionality into my background.js file, and use it there, to keep things tidy.
I've googled about and can't find anything! What am I missing?
Thanks!
1.
If you want to pull class.js functionality into background.js, you can just add it to the scripts array in manifest.json.
Like this,
{
"name": "My extension",
...
"background": {
"scripts": ["class.js","background.js"]
},
...
}
This means background.js will load after class.js is loaded, so you can use all the functionality of class.js in background.js.
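For illustration, with "scripts": ["class.js", "background.js"] the two files could look like this (Greeter is just a made-up example class; both files share the background page's global scope):

```javascript
// class.js: defines a helper class in the shared background scope
class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return "Hello, " + this.name + "!";
  }
}

// background.js: loaded after class.js, so Greeter already exists here
const greeter = new Greeter("extension");
console.log(greeter.greet()); // -> "Hello, extension!"
```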
2.
If you want to do this the require.js way, you should use a background page (background.html) instead of background scripts.
Like this,
manifest.json
{
"background": {
"page": "background.html"
},
"description": "Background example",
"name": "Background example",
"version": "0.0.1",
"manifest_version": 2,
"web_accessible_resources": [
"scripts/require-jquery.js",
"scripts/main.js"
]
}
As you can see, you must add the script files that you will use to the web_accessible_resources array.
background.html
<!DOCTYPE html>
<html>
<head>
<title>jQuery+RequireJS Sample Page</title>
<!-- This is a special version of jQuery with RequireJS built-in -->
<script data-main="scripts/main" src="scripts/require-jquery.js"></script>
</head>
<body>
</body>
</html>