Device analysis

Now we will run an FSR (free spectral range) analysis on the device data we uploaded in the previous notebook.

As before, make sure you have the following environment variables set or added to a .env file:

GDSFACTORY_HUB_API_URL="https://{org}.gdsfactoryhub.com"
GDSFACTORY_HUB_QUERY_URL="https://query.{org}.gdsfactoryhub.com"
GDSFACTORY_HUB_KEY="<your-gdsfactoryplus-api-key>"
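Since create_client_from_env reads these variables from the process environment, a .env file has to be loaded into the environment first. A minimal stdlib sketch of such a loader (load_env_file is a hypothetical helper, not part of gdsfactoryhub; in practice the python-dotenv package does this more robustly):

```python
import os
from pathlib import Path


def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: reads KEY=VALUE lines, skipping blanks and comments.

    Existing environment variables are never overwritten.
    """
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))
```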
import getpass

from tqdm.auto import tqdm

import gdsfactoryhub as gfh

project_id = f"rings-{getpass.getuser()}"
client = gfh.create_client_from_env(project_id=project_id)
api = client.api()
query = client.query()
utils = client.utils()


You can trigger an analysis in three ways: automatically, by defining it in the design manifest; through the UI; or with the Python DoData library.

from gdsfactoryhub.functions.device_data import fsr

fsr.run?

You can easily get a device data pkey to try the analysis on:

device_data_pkey = query.device_data().execute().data[0]["pk"]
fsr.run(
    device_data_pkey,
    xname="wavelength",
    yname="output_power",
    peaks_prominence=0.01,
);

(FSR analysis summary plot)
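The analysis itself amounts to locating resonance peaks along the xname column and measuring their average spacing. A rough stdlib sketch of the idea (estimate_fsr is illustrative only — the real fsr function and its peaks_prominence parameter presumably build on a proper peak finder such as scipy.signal.find_peaks):

```python
def estimate_fsr(wavelength, power, prominence=0.01):
    """Estimate the free spectral range as the mean spacing between peaks.

    A peak is a local maximum that rises at least `prominence` above the
    lowest sample in a small window around it (a crude prominence test).
    Returns None if fewer than two peaks are found.
    """
    peaks = []
    for i in range(1, len(power) - 1):
        if power[i] > power[i - 1] and power[i] > power[i + 1]:
            # Lowest neighboring sample within +/- 5 points of the candidate.
            lo = min(min(power[max(0, i - 5):i]), min(power[i + 1:i + 6]))
            if power[i] - lo >= prominence:
                peaks.append(i)
    if len(peaks) < 2:
        return None
    # FSR = mean wavelength spacing between consecutive peaks.
    spacings = [wavelength[b] - wavelength[a] for a, b in zip(peaks, peaks[1:])]
    return sum(spacings) / len(spacings)
```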

# Don't error out when the function already exists in DoData.
with gfh.suppress_api_error():
    result = api.upload_function(
        function_id="fsr",
        target_model="device_data",
        file=gfh.get_module_path(fsr),
        test_target_model_pk=device_data_pkey,
        test_kwargs={
            "xname": "wavelength",
            "yname": "output_power",
            "peaks_prominence": 0.01,
        },
    )
Duplicate function
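gfh.suppress_api_error keeps a "function already exists" response from aborting the cell. The general shape of such a helper is a small context manager — a hypothetical sketch (ApiError and the message are stand-ins, not the real gdsfactoryhub types):

```python
from contextlib import contextmanager


class ApiError(Exception):
    """Stand-in for the client library's API error type."""


@contextmanager
def suppress_api_error():
    """Run the enclosed block, logging API errors instead of raising."""
    try:
        yield
    except ApiError as exc:
        print(f"Suppressed API error: {exc}")
```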

Trigger the FSR analysis for all device data

results = []
dd_pks = [d["pk"] for d in query.device_data().execute().data]
for dd_pk in tqdm(dd_pks):
    with gfh.suppress_api_error():
        result = api.start_analysis(
            analysis_id=f"device_fsr_{dd_pk}",
            function_id="fsr",
            target_model="device_data",
            target_model_pk=dd_pk,
            kwargs={
                "xname": "wavelength",
                "yname": "output_power",
                "peaks_prominence": 0.01,
            },
        )
        results.append(result)

Let's have a look at the last successful analysis:

analysis_pks = [r["pk"] for r in results]
utils.analyses().wait_for_completion(pks=analysis_pks)
analyses = query.analyses().in_("pk", analysis_pks).execute().data
successful_analyses = [a for a in analyses if a["status"] == "COMPLETED"]
analysis = successful_analyses[-1]
img = api.download_plot(analysis["summary_plot"]["path"])
img.resize((530, 400))
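wait_for_completion is essentially a polling loop over the analyses table. A generic sketch of that pattern (the function and its fetch_statuses callback are stand-ins, not the gdsfactoryhub API):

```python
import time


def wait_for_completion(fetch_statuses, pks, poll_s=2.0, timeout_s=600.0):
    """Poll until every analysis reaches a terminal state.

    `fetch_statuses` is any callable mapping a list of pks to a
    {pk: status} dict — a stand-in for querying the analyses table.
    """
    terminal = {"COMPLETED", "FAILED"}
    deadline = time.monotonic() + timeout_s
    pending = set(pks)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"{len(pending)} analyses still pending")
        statuses = fetch_statuses(sorted(pending))
        pending = {pk for pk, s in statuses.items() if s not in terminal}
        if pending:
            time.sleep(poll_s)
    # Return the final status of every requested analysis.
    return fetch_statuses(list(pks))
```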

(summary plot of the last completed analysis)
