Null Object Reference when making a FlxPath - haxe

I'm trying to tweak this HaxeFlixel tutorial in order to make it more interesting and learn how to make games in the process. One of the first things I'm doing is restricting movement to tiles.
However, I've hit a wall: none of what I've tried is working. I've replaced the content of the movement() function in the Player class (which, for reference, is called every update()) with this:
if (FlxG.mouse.justReleased) {
    var _path:FlxPath;
    _path = new FlxPath(this, Reg.mWalls.findPath(getGraphicMidpoint(), FlxG.mouse.getWorldPosition()), speed);
}
(Reg.mWalls is where I moved PlayState._mWalls to make it accessible to this code.)
According to the docs this should create a FlxPath and immediately start it going (as the first argument isn't null), but instead it generates this error:
Null Object Reference
Called from openfl._v2.display.Stage.__pollTimers (openfl/_v2/display/Stage.hx line 1020)
Called from openfl._v2.display.Stage.__checkRender (openfl/_v2/display/Stage.hx line 317)
Called from openfl._v2.display.Stage.__render (openfl/_v2/display/Stage.hx line 1035)
Called from openfl._v2.display.DisplayObjectContainer.__broadcast (openfl/_v2/display/DisplayObjectContainer.hx line 280)
Called from openfl._v2.display.DisplayObject.__broadcast (openfl/_v2/display/DisplayObject.hx line 174)
Called from openfl._v2.display.DisplayObject.__dispatchEvent (openfl/_v2/display/DisplayObject.hx line 195)
Called from openfl._v2.events.EventDispatcher.dispatchEvent (openfl/_v2/events/EventDispatcher.hx line 100)
Called from openfl._v2.events.Listener.dispatchEvent (openfl/_v2/events/EventDispatcher.hx line 270)
Called from flixel.FlxGame.onEnterFrame (flixel/FlxGame.hx line 493)
Called from flixel.FlxGame.step (flixel/FlxGame.hx line 648)
Called from flixel.FlxGame.update (flixel/FlxGame.hx line 700)
Called from flixel.FlxState.tryUpdate (flixel/FlxState.hx line 155)
Called from PlayState.update (PlayState.hx line 125)
Called from flixel.group.FlxTypedGroup.update (flixel/group/FlxTypedGroup.hx line 89)
Called from Player.update (Player.hx line 47)
Called from Player.movement (Player.hx line 59)
Called from flixel.tile.FlxTilemap.findPath (flixel/tile/FlxTilemap.hx line 794)
Called from flixel.tile.FlxTilemap.computePathDistance (flixel/tile/FlxTilemap.hx line 1802)
Called from *._Function_1_1 (openfl/_v2/display/Stage.hx line 124)
AL lib: (EE) alc_cleanup: 1 device not closed

I ended up fixing this problem myself.
It turns out that I hadn't set any collision data for tile 0 (the blank tile) and that my map was full of spots that had no tile at all since they were 'outside'.
The fix was to flood-fill the empty area with tile 0 and add this to PlayState.create():
Reg.mWalls.setTileProperties(0, FlxObject.ANY);

Related

Encountered an internal AutoML error - ClientException: Message: No objects to concatenate

I am trying to implement hierarchical time series forecasting in Azure AutoML pipelines.
I followed this notebook for the implementation:
https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
When I ran the training pipeline on a compute instance it worked, but when I run the same pipeline on a compute cluster it breaks at the hts-proportion-calculation step.
This is the error I am getting,
system error:
Encountered an internal AutoML error. Error Message/Code: ClientException. Additional Info: ClientException:
      Message: No objects to concatenate
      InnerException: None
      ErrorResponse
{
    "error": {
        "message": "No objects to concatenate"
    }
}
Logs:
Loading arguments for scenario proportions-calculation
adding argument --input-medatadata
adding argument --hts-graph
adding argument --enable-event-logger
Input arguments dict is {'--input-medatadata': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_automl_training_workspaceblobstore/azureml/17ca5ae7-7269-4246-888f-e781071e3f5c/automl_training', '--hts-graph': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_hts_graph_workspaceblobstore/azureml/a2c1b15a-c895-41e8-b6a6-1ca37ebe9e77/hts_graph', '--enable-event-logger': None}
Unknown file to proceed outputs.txt
processing: outputs.txt with type None.
Cleaning up all outstanding Run operations, waiting 300.0 seconds
3 items cleaning up...
Cleanup took 0.001676321029663086 seconds
Traceback (most recent call last):
  File "proportions_calculation_wrapper.py", line 47, in <module>
    runtime_wrapper.run()
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_many_models/automl_pipeline_step_wrapper.py", line 63, in run
    self._run()
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 44, in _run
    proportions_calculation(self.arguments_dict, self.event_logger, script_run=self.step_run)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 173, in proportions_calculation
    proportion_files_list, forecasting_parameters.time_column_name, graph.label_column_name
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 92, in calculate_time_agg_sum_for_all_files
    df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 304, in concat
    sort=sort,
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 351, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
Please let me know how I can resolve this issue.
This error occurred because the iteration timeout was not less than the experiment timeout; the system error and the logs are misleading. The logs point to pandas ("No objects to concatenate") at:
df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
The error can be avoided by setting the iteration timeout to a value less than the experiment timeout. I had set iteration_timeout_minutes=60, which caused the error. The working configuration, with iteration_timeout_minutes=10 against a one-hour experiment timeout, is shown below:
automl_settings = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    experiment_timeout_hours=1,
    label_column_name=label_column_name,
    track_child_runs=False,
    forecasting_parameters=forecasting_parameters,
    pipeline_fetch_max_batch_size=15,
    model_explainability=model_explainability,
    n_cross_validations="auto",  # Feel free to set to a small integer (>=2) if runtime is an issue.
    cv_step_size="auto",
    # The following settings are specific to this sample and should be adjusted according to your own needs.
    iteration_timeout_minutes=10,
    iterations=15,
)
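For completeness, a hypothetical sketch of the forecasting_parameters object referenced above (the class and argument names follow the AzureML SDK v1; the column name and horizon are placeholders to replace with your own):
from azureml.automl.core.forecasting_parameters import ForecastingParameters

# Placeholder values: substitute your dataset's time column and horizon.
forecasting_parameters = ForecastingParameters(
    time_column_name="date",
    forecast_horizon=7,
)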
We are able to run the sample successfully using the compute cluster as given below.
from azureml.core.compute import ComputeTarget, AmlCompute

# Name your cluster
compute_name = "hts-compute"

if compute_name in ws.compute_targets:
    compute_target = ws.compute_targets[compute_name]
    if compute_target and type(compute_target) is AmlCompute:
        print("Found compute target: " + compute_name)
else:
    print("Creating a new compute target...")
    provisioning_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D16S_V3", max_nodes=20
    )
    # Create the compute target
    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
    # Can poll for a minimum number of nodes and for a specific timeout.
    # If no min node count is provided it will use the scale settings for the cluster.
    compute_target.wait_for_completion(
        show_output=True, min_node_count=None, timeout_in_minutes=20
    )
    # For a more detailed view of current cluster status, use the 'status' property.
    print(compute_target.status.serialize())

Is OpenCV running two instances of SIFT detectAndCompute concurrently?

I can get SIFT keypoints and descriptors from two separate, large images (~2 GB) when I run sift.detectAndCompute from the command line. I run it on one image, wait a very long time, but eventually get the keypoints and descriptors. Then I repeat for the second image, and again it takes a long time, but I DO eventually get my keypoints and descriptors. Here are the two lines I run from the IPython console in Spyder, which I am running on my machine with 32 GB of RAM. (MAX_MATCHES = 50000 in the code below):
sift = cv2.xfeatures2d.SIFT_create(MAX_MATCHES)
keypoints, descriptors = sift.detectAndCompute(imgGray, None)
This takes 10 minutes to finish, but it does finish. Next, I run this:
keypoints2, descriptors2 = sift.detectAndCompute(refimgGray, None)
When done, keypoints and keypoints2 DO contain 50000 keypoint objects.
However, if I run my script, which calls a function that uses sift.detectAndCompute and returns keypoints and descriptors, the process takes a long time, uses 100% of my memory and ~95% of my disk BW and then fails with this traceback:
runfile('C:/AV GIS/python scripts/img_align_w_geo_w_mask_refactor_ret_1.py', wdir='C:/AV GIS/python scripts')
Reading reference image : C:\Users\kellett\Downloads\3074_transparent_mosaic_group1.tif
xfrm for image = (584505.1165100001, 0.027370000000000002, 0.0, 4559649.608440001, 0.0, -0.027370000000000002)
Reading image to align : C:\Users\kellett\Downloads\3071_transparent_mosaic_group1.tif
xfrm for image = (584499.92168, 0.02791, 0.0, 4559648.80372, 0.0, -0.02791)
Traceback (most recent call last):
  File "<ipython-input-75-571660ddab7f>", line 1, in <module>
    runfile('C:/AV GIS/python scripts/img_align_w_geo_w_mask_refactor_ret_1.py', wdir='C:/AV GIS/python scripts')
  File "C:\Users\kellett\AppData\Local\Continuum\anaconda3\envs\testgdal\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 668, in runfile
    execfile(filename, namespace)
  File "C:\Users\kellett\AppData\Local\Continuum\anaconda3\envs\testgdal\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "C:/AV GIS/python scripts/img_align_w_geo_w_mask_refactor_ret_1.py", line 445, in <module>
    matches = find_matches(refKP, refDesc, imgKP, imgDesc)
  File "C:/AV GIS/python scripts/img_align_w_geo_w_mask_refactor_ret_1.py", line 301, in find_matches
    matches = matcher.match(dsc1, dsc2)
error: C:\ci\opencv_1512688052760\work\modules\core\src\stat.cpp:4024: error: (-215) (type == 0 && dtype == 4) || dtype == 5 in function cv::batchDistance
The function is simply called once for each image, as follows:
print("Reading image to align : ", imFilename)
img, imgGray, imgEdgmask, imgXfrm, imgGeoInfo = read_ortho4align(imFilename)
refKP, refDesc = extractKeypoints(refimgGray, refEdgmask)
imgKP, imgDesc = extractKeypoints(imgGray, imgEdgmask)
HERE IS MY QUESTION (sorry for shouting): Do you think Python tries to run the two lines above concurrently in some way? If so, how can I force it to run serially? If not, do you have any idea why the two keypoint detections would work individually, but not when they come one after another in a script?
One more clue - I put in a statement to see if the script proceeds to the second detectAndCompute statement before it fails, and it does. (I just put a print statement in between the two.)
My error was coming later in my script, where I was finding matches.
I have no reason to believe the two SIFT keypoint finding processes are occurring at the same time.
I downsampled the images I was searching for SIFT keypoints and was able to iterate my troubleshooting more quickly and found my error.
I will look at my error more closely next time before asking a question.
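For anyone who hits the same assertion: the (-215) error in cv::batchDistance generally means the two descriptor arrays handed to the matcher are incompatible (different dtype or width) or one of them is empty. A minimal sanity-check sketch, assuming the NumPy descriptor arrays returned by detectAndCompute (the helper name is hypothetical):
def check_descriptors(dsc1, dsc2):
    # The matcher needs both sets non-empty, with matching dtype and width.
    assert dsc1 is not None and len(dsc1) > 0, "first descriptor set is empty"
    assert dsc2 is not None and len(dsc2) > 0, "second descriptor set is empty"
    assert dsc1.dtype == dsc2.dtype, "dtype mismatch: %s vs %s" % (dsc1.dtype, dsc2.dtype)
    assert dsc1.shape[1] == dsc2.shape[1], "descriptor width mismatch"

check_descriptors(refDesc, imgDesc)  # run this before matcher.match(dsc1, dsc2)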

Sphinx documentation and links to Markdown

I'm trying to use Sphinx to build some documentation from Markdown source. My conf.py is as follows...
conf.py
from recommonmark.parser import CommonMarkParser

project = 'DS'
copyright = '2018, DS'
author = 'DS, Work'
version = ''
release = ''
extensions = []
templates_path = ['_templates']
source_suffix = ['.rst', '.md']
master_doc = 'index'
language = None
exclude_patterns = []
pygments_style = 'sphinx'
html_theme = 'classic'
html_static_path = ['_static']
source_parsers = {
    '.md': CommonMarkParser,
}
htmlhelp_basename = 'DSDocumentationdoc'
latex_elements = {
}
latex_documents = [
    (master_doc, 'DSDocumentation.tex', 'DS Documentation',
     'DS, Work', 'manual'),
]
man_pages = [
    (master_doc, 'dsdocumentation', 'DS Documentation',
     [author], 1)
]
texinfo_documents = [
    (master_doc, 'DSDocumentation', 'DS Documentation',
     author, 'DSDocumentation', 'One line description of project.',
     'Miscellaneous'),
]
index.rst
Welcome to DS Documentation!
======================================

The following documentation is produced and maintained by the Data Science team.

Contents:

.. toctree::
   :maxdepth: 2
   :glob:

   README.md
   documentation.md
   getting_started/*
   how-tos/*
   statistics_data_visualisation.md
The documents build and the HTML output is generated; however, README.md has links to other Markdown documents in the two sub-directories, such as the following...
... [this document](./getting_started/setting_your_machine_up.md)...
...which in the translated README.html document has not been converted to the corresponding HTML target, as it has been recognised as reference external...
...<a class="reference external" href="./getting_started/setting_your_machine_up.md">this document</a>...
...I was half-expecting/hoping it would be output as reference internal with the file extension converted appropriately...
...<a class="reference internal" href="./getting_started/setting_your_machine_up.html">this document</a>...
...so that links work in the same vein as the Table of Contents does in the sidebar.
Any suggestions as to whether this can be achieved would be appreciated.
Cheers.
EDIT
Trying out the solution suggested by @waylan, I have added the following to my conf.py to enable enable_auto_doc_ref...
def setup(app):
    app.add_config_value('recommonmark_config', {
        'enable_auto_doc_ref': True,
    }, True)
    app.add_transform(AutoStructify)
...and on running make html I get the following error.....
❱ cat /tmp/sphinx-err-57rejer3.log
# Sphinx version: 1.8.0
# Python version: 3.6.6 (CPython)
# Docutils version: 0.14
# Jinja2 version: 2.10
# Last messages:
# building [mo]: targets for 0 po files that are out of date
#
# building [html]: targets for 16 source files that are out of date
#
# updating environment:
#
# 16 added, 0 changed, 0 removed
#
# reading sources... [ 6%] README
#
# Loaded extensions:
# sphinx.ext.mathjax (1.8.0) from /home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/ext/mathjax.py
# alabaster (0.7.11) from /home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/alabaster/__init__.py
Traceback (most recent call last):
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/cmd/build.py", line 304, in build_main
    app.build(args.force_all, filenames)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/application.py", line 341, in build
    self.builder.build_update()
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 347, in build_update
    len(to_build))
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 360, in build
    updated_docnames = set(self.read())
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 468, in read
    self._read_serial(docnames)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 490, in _read_serial
    self.read_doc(docname)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 534, in read_doc
    doctree = read_doc(self.app, self.env, self.env.doc2path(docname))
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/io.py", line 318, in read_doc
    pub.publish()
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/core.py", line 218, in publish
    self.apply_transforms()
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/core.py", line 199, in apply_transforms
    self.document.transformer.apply_transforms()
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/transforms/__init__.py", line 90, in apply_transforms
    Transformer.apply_transforms(self)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/transforms/__init__.py", line 171, in apply_transforms
    transform.apply(**kwargs)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 325, in apply
    self.traverse(self.document)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 287, in traverse
    newnode = self.find_replace(c)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 267, in find_replace
    newnode = self.auto_doc_ref(node)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 175, in auto_doc_ref
    return self.state_machine.run_role('doc', content=content)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/states.py", line 134, in run_role
    content=content)
TypeError: 'NoneType' object is not callable
I've looked through the last two calls and I think this might be down to content not being set, which may be something to do with my index.rst but I'm really out of my depth here.
The recommonmark documentation suggests enabling AutoStructify by adding the following to your conf.py file:
from recommonmark.transform import AutoStructify

github_doc_root = 'https://github.com/rtfd/recommonmark/tree/master/doc/'

def setup(app):
    app.add_config_value('recommonmark_config', {
        'url_resolver': lambda url: github_doc_root + url,
        'auto_toc_tree_section': 'Contents',
    }, True)
    app.add_transform(AutoStructify)
This will give you the following features:
enable_auto_toc_tree: whether to enable the Auto Toc Tree feature.
auto_toc_tree_section: when enabled, Auto Toc Tree will only be enabled on the section whose title matches.
enable_auto_doc_ref: whether to enable the Auto Doc Ref feature.
enable_math: whether to enable the Math Formula feature.
enable_inline_math: whether to enable the Inline Math feature.
enable_eval_rst: whether embedded reStructuredText is enabled.
url_resolver: a function that maps an existing relative position in the document to an HTTP link.
Of note is the Auto Doc Ref feature:
It is common to refer to another document page in one document. We usually use a reference to do that. AutoStructify will translate these reference blocks into structured document references. For example:
[API Reference](api_ref.md)
will be translated to the AST of following reStructuredText code
:doc:`API Reference </api_ref>`
And it will be rendered as API Reference
Why is this necessary? Because, unlike Rst, Markdown does not have any knowledge of anything outside of the given document and has no support for Rst style directives. Therefore, there is no mechanism to transform a URL.
Instead, AutoStructify waits until after the recommonmark bridge converts the Markdown to Sphinx's underlying document structure (docutils document object), then it runs a series of transformers on it to provide limited Rst like functionality. Even with AutoStructify, you will never get full feature support when using Markdown. That would require Markdown to have native support for directives, which is not likely to ever happen.
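Pulling the pieces together for the setup in the question, a minimal conf.py sketch (assuming recommonmark is installed; enable_auto_doc_ref is the option that rewrites relative .md links into :doc: references, combining the source_parsers entry from the question with the AutoStructify setup above):
from recommonmark.parser import CommonMarkParser
from recommonmark.transform import AutoStructify

source_suffix = ['.rst', '.md']
source_parsers = {
    '.md': CommonMarkParser,
}

def setup(app):
    app.add_config_value('recommonmark_config', {
        # Rewrites [text](other_doc.md) links into internal :doc: references.
        'enable_auto_doc_ref': True,
    }, True)
    app.add_transform(AutoStructify)
If the TypeError above persists, it is worth checking that your recommonmark release is compatible with your Sphinx version, since the crash originates inside recommonmark's own auto_doc_ref transform.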

pywinauto error argument 4: int too long to convert

I use Python 3, pywinauto, and the tested app; all are 64-bit.
I get an error when I try to expand a tree:
tree_item = systreeview.GetItem([current_menu_item, u'xxxxxx'])
Everything worked with the 32-bit app.
Log:
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1523, in get_item
texts = [r.text() for r in roots]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1523, in <listcomp>
texts = [r.text() for r in roots]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 960, in text
return self._readitem()[1]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1383, in _readitem
remote_mem)
ctypes.ArgumentError: argument 4: <class 'OverflowError'>: int too long to convert*
It was a bug. Fixed now. Thank you everyone.
Fixed another way in pull request #373. pywinauto 0.6.3 is out with the fix.
Just replaced the 2 remaining win32functions.SendMessage calls with self.send_message everywhere.
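For anyone landing here with the same OverflowError, the user-side fix is simply to upgrade; a sketch (the tree path values below are the asker's placeholders):
# Upgrade to a release containing the 64-bit fix:
#   pip install --upgrade "pywinauto>=0.6.3"
# The same call then works against a 64-bit application; 0.6.x also
# exposes the snake_case spelling seen in the traceback:
tree_item = systreeview.get_item([current_menu_item, u'xxxxxx'])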

Pinterest API search not working anymore

I was looking for Pinterest API endpoints and found this URL:
https://api.pinterest.com/v3/domains/<domains>/search/pins/?query=<query>&access_token=<access_token>
I was able to generate the access_token, but every time I tried a POST to that URL it gave me:
{
    "status": "failure",
    "code": 12,
    "host": "ngapi2-b2fc674c",
    "generated_at": "Mon, 09 Feb 2015 17:45:29 +0000",
    "message": "Something went wrong on our end. Sorry about that.",
    "data": "path: /v3/domains/www.vtracker.com.br/search/pins/\nparams: [('access_token', [u'blablahblahblah']), ('query', [u'como'])]\nTraceback (most recent call last):\n File \"/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py\", line 1504, in wsgi_app\n response = self.full_dispatch_request()\n File \"/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py\", line 1264, in full_dispatch_request\n rv = self.handle_user_exception(e)\n File \"/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py\", line 1262, in full_dispatch_request\n rv = self.dispatch_request()\n File \"/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py\", line 1248, in dispatch_request\n return self.view_functions[rule.endpoint](**req.view_args)\n File \"../api/pin_api.py\", line 715, in __call__\n self._perform_auth()\n File \"../api/pin_api.py\", line 848, in _perform_auth\n authorization.perform(dictified_values, request.cookies, request.headers)\n File \"../api/pin_api.py\", line 271, in perform\n params, cookies, headers)\n File \"../api/pin_api.py\", line 121, in perform\n headers=headers)\n File \"../api/decorators.py\", line 212, in verify_user_authorization\n core.Consumer.manager.get_scope_as_int(required_scope)):\n File \"../core/managers/consumer_manager.py\", line 479, in check_scope\n scope = migrate_legacy_scope(scope)\n File \"../core/managers/consumer_manager.py\", line 475, in migrate_legacy_scope\n if ~scope & old == 0:\nTypeError: bad operand type for unary ~: 'NoneType'\n"
}
Is the Pinterest API v3 closed, or is something else going on?
Thanks.
You must first ensure that your app has been approved by Pinterest. You may need to reapply for approval (I had to reapply for my app). Once you are approved, you will see a link on your app page called "Visit API docs". This will link to the V3 documentation (https://developers.pinterest.com/docs/redoc/pinner_app). At least, this is the documentation I have been given access to. If you have a different type of app, maybe you will have access to other documentation.
After your app has been approved, the section of the documentation you will be interested in is "Search user pins" (https://developers.pinterest.com/docs/redoc/pinner_app/#tag/search).
The endpoint is: https://api.pinterest.com/v3/search/user_pins/{user}/
The documentation provides details about the query parameters that are allowed and the response data.
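As a quick smoke test once your app is approved, a minimal sketch using Python requests (the username, token, and exact query parameter names are assumptions; the linked documentation is authoritative):
import requests

ACCESS_TOKEN = "<access_token>"  # placeholder: your approved app's token
user = "<username>"              # placeholder: the user whose pins to search

resp = requests.get(
    "https://api.pinterest.com/v3/search/user_pins/{}/".format(user),
    params={"query": "como", "access_token": ACCESS_TOKEN},
)
print(resp.status_code)
print(resp.json())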
