I have seen tons of questions, with answers, on how to periodically run a function. The problem? They all lock the program from moving on.
What do I need?
I have a websocket and a for {} loop that keeps reading messages, and every 10 seconds I need to call the heartbeat function without interrupting the program or stopping the chat reading.
I have thought of a few approaches using channels, but none of them seem clean/nice, so I think someone more experienced may have a good and simpler approach to this.
I was using something like this, but it never gets to the end of the function to return what I need for the program to continue.
timerCh := time.Tick(time.Duration(10) * time.Second)
for range timerCh { // this loop never ends, so the return below is never reached
go Heartbeat(ws)
}
return ws
I am looking for a way to call Heartbeat every 10 seconds without:
calling it repeatedly every time I read from the incoming websocket connection, and
without locking the program.
Some context:
This is for a chat bridge, so I open a websocket, keep reading from it, and only send to the other chat when the message is a chat message; in the meantime I need to send a heartbeat without blocking any of this.
What do I have now?
Right now it all works, but after 30 seconds or so my websocket closes:
2020/07/01 20:59:09 Error read: websocket: close 1000 (normal)
EDIT: Go with #JimB's answer! This was just meant to show that the timer should be managed outside of the main goroutine.
Your heartbeat function should manage its own timer. Something like this should work:
package main
import (
"fmt"
"time"
)
func main() {
go heartbeat()
for i := 0; i < 10; i++ {
fmt.Println("main is still going")
time.Sleep(time.Second * 3)
}
}
func heartbeat() {
for {
timer := time.After(time.Second * 10)
<-timer
fmt.Println("heartbeat happened!")
}
}
This example will print from main every 3 seconds and from heartbeat every 10 seconds, and it will terminate after 30 seconds.
You will want to be able to shut down the ticker, so do not use time.Tick, rather use time.NewTicker.
If you move the entire for loop with the ticker into a goroutine, it will not block the main goroutine.
go func() {
ticker := time.NewTicker(10 * time.Second)
defer ticker.Stop()
for range ticker.C {
Heartbeat(ws)
// return on error or cancellation
}
}()
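For completeness, here is a minimal sketch of how the ticker goroutine can run alongside the websocket read loop. It assumes a gorilla/websocket-style ws.ReadMessage() and a hypothetical handleMessage helper (plus the time and websocket imports), so adjust the details to your client:
func readLoop(ws *websocket.Conn) error {
	done := make(chan struct{})
	defer close(done) // stops the heartbeat goroutine when the read loop returns

	go func() {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				Heartbeat(ws) // keep the connection alive without blocking reads
			case <-done:
				return
			}
		}
	}()

	for {
		_, msg, err := ws.ReadMessage()
		if err != nil {
			return err // e.g. websocket: close 1000 (normal)
		}
		handleMessage(msg) // hypothetical: forward chat messages to the other chat
	}
}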
Related
I have CAPL test code that controls the start of CAN signal sending. My goal is to delay the start of the sending process.
My idea is to do this via the setTimer() function in combination with isTimerActive().
In general, my code looks like the following:
main() {
CANstart();
function_2();
function_3();
}
CANstart() {
SetTimer(Delay, 5000); //Timer initialization, set to be 5000ms
while (isTimerActive()==1) {
// this while loop is meant to stop the code from proceeding until the timer has expired and its event has been executed
}
StartCANTransmitting(); // After this function, jump back to main and proceed with function_2
}
on timer Delay {
// Do nothing, just wait
}
The program code above leads to being stuck at that point: CANoe does not respond, and the only way I can end the simulation is via the task manager.
Further examination on my side led to the conclusion that the timer needs more time to process and is never executed at all.
Without the isTimerActive() check, the program code does not wait for the timer to finish and there is no delay at all. It seems like the code runs through without waiting for the timer event.
It seems like CAPL handles loops very badly.
I checked Stack Overflow, and the following forum posts talk about issues very similar to mine without offering any working solutions:
CAPL Programming usage of Timer as a delay
Are timers running, while loops are active?
Delay function in CAPL apart from testwaitfortimeout()
I see a great many issues with your code. It actually does not feel like code at all, but more like pseudo-code. Does it compile in your CAPL Browser?
main() {
CANstart();
function_2();
function_3();
}
If this is a function declaration, then it is missing both a type and a return value. In addition, when are you expecting main() to be executed?
The same applies to:
CANstart()
Let us take a step back. You need to delay the start of CAN transmission. If you need to do so because you have code running outside CANalyzer/CANoe, then I suggest you call the application via the command line (refer to the guide for more help).
If, however, you need to have blocks running in your setup configuration, like a Replay block, a Logging block or whatever, I suggest you do the following:
variables {
/* define your variables here. You need to define all messages you want to send and respective signal values if not defaulted */
message 0x12345678 msg1; // refer to CAPL guide on how to define message type variables
msTimer delay;
msTimer msgClock1;
}
on start {
/* when you hit the start measurements button (default F9) */
setTimer(delay, 5000); // also note your syntax is wrong in the example
}
on timer delay {
/* when timer expires, start sending messages */
output(msg1); // send your message
setTimer(msgClock1,250); // set timer for cyclic message sending
}
on timer msgClock1 {
/* this mimics the behaviour of an IG block */
setTimer(msgClock1,250); // keep sending the message
output(msg1);
}
Does this achieve your goal? Please feel free to ask for more details.
It appears that you have a problem with the while (isTimerActive()==1) { statement.
The CAPL function int isTimerActive takes a timer or msTimer variable as its parameter and returns 1 if the timer is active, otherwise 0.
You can check whether the timer is active, and the time left until it elapses, in the following way:
timer t;
write("Active? %d", isTimerActive(t)); // writes 0
setTimer(t, 5);
write("Active? %d", isTimerActive(t)); // writes 1
write("Time to elapse: %d",timeToElapse(t)); // Writes 5
Try passing the timer as the parameter: while (isTimerActive(Delay)==1) {
I would not suggest using the while statement; instead, you can use the timer directly to call the function StartCANTransmitting(). Also, your main() should be a MainTest():
void MainTest()
{
TestModuleTitle("Sample Tests");
TestModuleDescription("This test module calls some test cases to demonstrate ");
CANstart();
if (TestGetVerdictLastTestCase() == 1)
Write("CANstart failed.");
else
Write("CANstart passed.");
}
testcase CANstart() {
// add info block to test case in report
TestReportAddMiscInfoBlock("Used Test Parameters");
TestReportAddMiscInfo("Max. voltage", "19.5 V");
TestReportAddMiscInfo("Max. current", "560 mA");
TestReportAddMiscInfo("StartCANTransmitting");
SetTimer(Delay, 5000); //Timer initialization, set to be 5000ms
}
on timer Delay {
StartCANTransmitting();
}
I'm a newbie in programming and I need help. I am trying to write a GitLab scraper in Go.
Something goes wrong when I try to get information about the projects in multithreading mode.
Here is the code:
func (g *Gitlab) getAPIResponce(url string, structure interface{}) error {
responce, responce_error := http.Get(url)
if responce_error != nil {
return responce_error
}
ret, _ := ioutil.ReadAll(responce.Body)
if string(ret) != "[]" {
err := json.Unmarshal(ret, structure)
return err
}
return errors.New(error_emptypage)
}
...
func (g *Gitlab) GetProjects() {
projects_chan := make(chan Project, g.LatestProjectID)
var waitGroup sync.WaitGroup
queue := make(chan struct{}, 50)
for i := g.LatestProjectID; i > 0; i-- {
url := g.BaseURL + projects_url + "/" + strconv.Itoa(i) + g.Token
waitGroup.Add(1)
go func(url string, channel chan Project) {
queue <- struct{}{}
defer waitGroup.Done()
var oneProject Project
err := g.getAPIResponce(url, &oneProject)
if err != nil {
fmt.Println(err.Error())
}
fmt.Printf(".")
channel <- oneProject
<-queue
}(url, projects_chan)
}
go func() {
waitGroup.Wait()
close(projects_chan)
}()
for project := range projects_chan {
if project.ID != 0 {
g.Projects = append(g.Projects, project)
}
}
}
And here is the output:
$ ./gitlab-auditor
latest project = 1532
Gathering projects...
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Get https://gitlab.example.com/api/v4/projects/563&private_token=SeCrEt_ToKeN: unexpected EOF
Get https://gitlab.example.com/api/v4/projects/558&private_token=SeCrEt_ToKeN: unexpected EOF
..Get https://gitlab.example.com/api/v4/projects/531&private_token=SeCrEt_ToKeN: unexpected EOF
Get https://gitlab.example.com/api/v4/projects/571&private_token=SeCrEt_ToKeN: unexpected EOF
.Get https://gitlab.example.com/api/v4/projects/570&private_token=SeCrEt_ToKeN: unexpected EOF
..Get https://gitlab.example.com/api/v4/projects/467&private_token=SeCrEt_ToKeN: unexpected EOF
Get https://gitlab.example.com/api/v4/projects/573&private_token=SeCrEt_ToKeN: unexpected EOF
................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
Every time it is different projects, but their IDs are around 550.
When I try to curl the links from the output, I get normal JSON. When I run this code with queue := make(chan struct{}, 1) (single-threaded), everything is fine.
What can it be?
I would say this is not a very clear way to achieve concurrency.
What seems to be happening here is:
You create a buffered channel that has a size of 50.
Then you fire up 1532 goroutines.
The first 50 of them enqueue themselves and start processing; by the time they <-queue and free up some space, a random one of the rest manages to get onto the queue.
As people say in the comments, most certainly you hit some limit around the time the blast has made it to around ID 550; then GitLab's API gets angry at you and rate-limits.
Then another goroutine is fired that will close the channel to notify the main goroutine.
The main goroutine reads the messages.
The talk Go Concurrency Patterns as well as the blog post Concurrency in Go might help.
Personally, I rarely use buffered channels. For your problem I would go like this (a rough sketch follows the list):
Define a number of workers.
Have the main goroutine fire up the workers, each running a func that listens on a channel of ints, does the API call, and writes to a channel of projects.
Have the main goroutine send the project numbers to be fetched on the channel of ints and read from the channel of projects.
Maybe rate-limit by firing a ticker and having main read from it before it sends the next request?
Main closes the int channel to notify the workers to stop.
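Here is a rough sketch of that worker-pool approach (my own illustration, not tested against the real GitLab API; fetchProject, the worker count and the ID range are placeholders):
package main

import (
	"fmt"
	"sync"
	"time"
)

type Project struct{ ID int }

// fetchProject is a hypothetical stand-in for the real HTTP call.
func fetchProject(id int) (Project, error) {
	return Project{ID: id}, nil
}

func main() {
	const numWorkers = 10
	ids := make(chan int)
	results := make(chan Project)

	// Fire up the workers: each listens on ids, does the call, writes to results.
	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range ids {
				if p, err := fetchProject(id); err == nil {
					results <- p
				}
			}
		}()
	}

	// Close results once all workers are done, so the collector below can finish.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect the results in a separate goroutine while main feeds the ids.
	var projects []Project
	collected := make(chan struct{})
	go func() {
		for p := range results {
			projects = append(projects, p)
		}
		close(collected)
	}()

	// Main sends the ids, rate-limited by a ticker, then closes the channel
	// to tell the workers to stop.
	ticker := time.NewTicker(50 * time.Millisecond)
	defer ticker.Stop()
	for id := 1; id <= 100; id++ {
		<-ticker.C
		ids <- id
	}
	close(ids)

	<-collected
	fmt.Println("fetched", len(projects), "projects")
}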
I am creating a simple HTTP server using Go. I have two questions: one is more theoretical, and the other is about the real program.
Concurrent request handling
I create a server and use s.ListenAndServe() to handle the requests.
As far as I understand, the requests are served concurrently. I use a simple handler to check it:
func ServeHTTP(rw http.ResponseWriter, request *http.Request) {
fmt.Println("1")
time.Sleep(1 * time.Second) //Phase 2 delete this line
fmt.Fprintln(rw, "Hello, world.")
fmt.Println("2")
}
I see that if I send several requests, all the "1"s appear, and only after a second do all the "2"s appear.
But if I delete the Sleep line, I see that the program never starts a request before it has finished with the previous one (the output is 1 2 1 2 1 2 ...).
So I don't understand whether they are really concurrent or not. If they are, I would expect to see some mess in the prints...
The real-life problem
In the real handler, I send the request to another server and return the answer to the user (with some changes to the request and the answer, but in essence it is a kind of proxy). All this of course takes time, and from what I can see (by adding some prints to the handler), the requests are handled one by one, with no concurrency between them (my prints show me that a request starts, goes through all the steps, ends, and only then do I see a new start...).
What can I do to make them really concurrent?
Running the handler function as a goroutine gives an error that the body of the request is already closed. Also, if it is already concurrent, adding more goroutines will only make things worse.
Thank you!
Your example makes it very hard to tell what is happening.
The below example will clearly illustrate that the requests are run in parallel.
package main
import (
"fmt"
"log"
"net/http"
"time"
)
func main() {
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
if len(r.FormValue("case-two")) > 0 {
fmt.Println("case two")
} else {
fmt.Println("case one start")
time.Sleep(time.Second * 5)
fmt.Println("case one end")
}
})
if err := http.ListenAndServe(":8000", nil); err != nil {
log.Fatal(err)
}
}
Make one request to http://localhost:8000
Make another request to http://localhost:8000?case-two=true within 5 seconds
The console output will be:
case one start
case two
case one end
It does serve requests in a concurrent fashion as can be seen here in the source https://golang.org/src/net/http/server.go#L2293.
Here is a contrived example:
package main
import (
"fmt"
"log"
"net/http"
"sync"
"time"
)
func main() {
go startServer()
sendRequest := func() {
resp, err := http.Get("http://localhost:8000/")
if err != nil {
return // skip failed requests; resp would be nil here
}
defer resp.Body.Close()
}
start := time.Now()
var wg sync.WaitGroup
ch := make(chan int, 10)
for i := 0; i < 10; i++ {
wg.Add(1)
go func(n int) {
defer wg.Done()
sendRequest()
ch <- n
}(i)
}
go func() {
wg.Wait()
close(ch)
}()
fmt.Printf("completion sequence :")
for routineNumber := range ch {
fmt.Printf("%d ", routineNumber)
}
fmt.Println()
fmt.Println("time:", time.Since(start))
}
func startServer() {
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
time.Sleep(1 * time.Second)
})
if err := http.ListenAndServe(":8000", nil); err != nil {
log.Fatal(err)
}
}
Over several runs it is easy to see that the completion ordering of the goroutines which send the requests is completely random, and given the fact that channels are FIFO, we can conclude that the server handled the requests in a concurrent fashion, irrespective of whether HandleFunc sleeps or not.
(The assumption being that all the requests start at about the same time.)
In addition, if you do sleep for a second in HandleFunc, the time it takes to complete all 10 goroutines is consistently 1.xxx seconds, which further shows that the server handled the requests concurrently; otherwise the total time to complete all requests would have been 10+ seconds.
Example:
completion sequence :3 0 6 2 9 4 5 1 7 8
time: 1.002279359s
completion sequence :7 2 3 0 6 4 1 9 5 8
time: 1.001573873s
completion sequence :6 1 0 8 5 4 2 7 9 3
time: 1.002026465s
Analyzing concurrency by printing without synchronization is almost always indeterminate.
While Go serves requests concurrently, the client might actually be blocking (waiting for the first request to complete before sending the second), and then one will see exactly the behavior reported initially.
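If you want to check this with something sturdier than prints, one option (my own sketch, not from the original answers) is to count in-flight handlers with sync/atomic; any response reporting a count greater than 1 proves the server ran handlers concurrently:
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

func main() {
	var inFlight int64
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		n := atomic.AddInt64(&inFlight, 1) // handlers currently running, including this one
		defer atomic.AddInt64(&inFlight, -1)
		time.Sleep(time.Second)
		fmt.Fprintf(w, "in-flight handlers when this request started: %d\n", n)
	})
	if err := http.ListenAndServe(":8000", nil); err != nil {
		log.Fatal(err)
	}
}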
I was having the same problem: my client code was sending "GET" requests via XMLHttpRequest to a slow handler (I used handler code similar to the one posted above, with a 10-second delay). It turns out that such requests block each other. Example JavaScript client code:
for (var i = 0; i < 3; i++) {
var xhr = new XMLHttpRequest();
xhr.open("GET", "/slowhandler");
xhr.send();
}
Note that xhr.send() will return immediately, since this is an asynchronous call, but that doesn't guarantee that the browser will send the actual "GET" request right away.
GET requests are subject to caching, and if one tries to GET the same URL, caching might (in fact, will) affect how the requests to the server are made. POST requests are not cached, so if one changes "GET" to "POST" in the example above, the Go server code will show that /slowhandler is hit concurrently (you will see "1 1 1 [...pause...] 2 2 2" printed).
I am currently studying Go, and I miss setTimeout from Node.js. I haven't read much yet, and I'm wondering if I could implement the same thing in Go, like an interval or a callback.
Is there a way I can port this from Node to Go? I heard Go handles concurrency very well, and this might need some goroutines or something similar?
//Nodejs
function main() {
//Do something
setTimeout(main, 3000)
console.log('Server is listening to 1337')
}
Thank you in advance!
//Go version
func main() {
for t := range time.Tick(3*time.Second) {
fmt.Printf("working %s \n", t)
}
//basically this will not execute..
fmt.Printf("will be called 1st")
}
The closest equivalent is the time.AfterFunc function:
import "time"
...
time.AfterFunc(3*time.Second, somefunction)
This will spawn a new goroutine and run the given function after the specified amount of time. There are other related functions in the package that may be of use:
time.After: this version will return a channel that will send a value after the given amount of time. This can be useful in combination with the select statement if you want a timeout while waiting on one or more channels.
time.Sleep: this version will simply block until the timer expires. In Go it is more common to write synchronous code and rely on the scheduler to switch to other goroutines, so sometimes simply blocking is the best solution.
There are also the time.Timer and time.Ticker types, which can be used for less trivial cases where you may need to cancel the timer.
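As a small illustration (my own addition, not part of the original answer), time.AfterFunc returns a *time.Timer whose Stop method cancels the pending call:
package main

import (
	"fmt"
	"time"
)

func main() {
	// Schedule a function to run after 3 seconds...
	t := time.AfterFunc(3*time.Second, func() {
		fmt.Println("this never prints")
	})

	// ...then cancel it before it fires.
	time.Sleep(time.Second)
	if t.Stop() {
		fmt.Println("timer cancelled before it fired")
	}
}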
This website provides an interesting example and explanation of timeouts involving channels and the select statement.
// _Timeouts_ are important for programs that connect to
// external resources or that otherwise need to bound
// execution time. Implementing timeouts in Go is easy and
// elegant thanks to channels and `select`.
package main
import "time"
import "fmt"
func main() {
// For our example, suppose we're executing an external
// call that returns its result on a channel `c1`
// after 2s.
c1 := make(chan string, 1)
go func() {
time.Sleep(2 * time.Second)
c1 <- "result 1"
}()
// Here's the `select` implementing a timeout.
// `res := <-c1` awaits the result and `<-time.After`
// awaits a value to be sent after the timeout of
// 1s. Since `select` proceeds with the first
// receive that's ready, we'll take the timeout case
// if the operation takes more than the allowed 1s.
select {
case res := <-c1:
fmt.Println(res)
case <-time.After(1 * time.Second):
fmt.Println("timeout 1")
}
// If we allow a longer timeout of 3s, then the receive
// from `c2` will succeed and we'll print the result.
c2 := make(chan string, 1)
go func() {
time.Sleep(2 * time.Second)
c2 <- "result 2"
}()
select {
case res := <-c2:
fmt.Println(res)
case <-time.After(3 * time.Second):
fmt.Println("timeout 2")
}
}
You can also run it on the Go Playground
Another solution could be to use an Immediately-Invoked Function Expression (IIFE) inside a goroutine, like:
go func() {
time.Sleep(time.Second * 3)
// your code here
}()
You can do this by using the Sleep function, giving it the duration you need:
package main
import (
"fmt"
"time"
)
func main() {
fmt.Println("First")
time.Sleep(5 * time.Second)
fmt.Println("second")
}
I am trying to detect deadlocks in MPI.
Is there any method by which we can return from a function like MPI_Recv after a particular time?
MPI_Recv is a blocking function and will just sit there until it receives the data it is waiting for, so if you are looking to have it time out and raise an error when things lock up, then I don't think that's the one for you.
You could look into using MPI_Irecv, which is the non-blocking version. You could then emulate the blocking behaviour of MPI_Recv using MPI_Wait or MPI_Test.
If you use a combination of MPI_Irecv and MPI_Test, you can make a snippet that waits to receive for a specified length of time and then errors out if it hasn't. Rough example:
MPI_Irecv(..., &request); //start a receive request, non-blocking
time_t start_time = time(NULL); //get start time
MPI_Test(&request, &gotData, ...); //test, have we got it yet
//loop until we have received, or taken too long
while (!gotData && difftime(time(NULL), start_time) < TIMEOUT_TIME) {
//wait a bit (e.g. sleep briefly) so we do not spin at full speed
MPI_Test(&request, &gotData, ...); //test again
}
//By now we either have received the data, or taken too long, so...
if (!gotData) {
//we must have timed out
MPI_Cancel(&request);
MPI_Request_free(&request);
//throw an error
}