tl;dr: Tested GPT-5 models (nano, mini, full) generating pelican-on-bicycle SVGs. GPT-5 nano: $0.0038. GPT-5 mini: $0.0054. GPT-5 full: $0.040.

I watched the livestream of the GPT-5 release yesterday, and besides some strange charts the presentation was pretty solid.
Next, I read Simon Willison's review, since he is one of the few people who got to experiment with the model before release. I particularly enjoy his tests with the "Generate an SVG of a pelican riding a bicycle" prompt. Since the rollout of the model in ChatGPT will take some time, I decided to run the same test using the API.
I wanted to test this directly in my Dify instance. There were two options: either manually add the models in Dify (which would mean finding my API key or creating a new one, as the field is required in the form), or contribute GPT-5 support to the Dify plugin. I chose the second option and created a pull request, which was merged very quickly. Another community member followed up with a fix PR, since this series of models no longer supports the temperature parameter.
This allowed me to interact with the gpt-5-nano model, but I ran into issues with the others:
Run failed: [models] Bad Request Error, Error code: 400 - {'error': {'message': 'Your organization must be verified to stream this model. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate.', 'type': 'invalid_request_error', 'param': 'stream', 'code': 'unsupported_value'}}
It seems that this verification process has been in place for some time. After completing the verification, I was also able to access gpt-5-mini and gpt-5.
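Note that the 400 above complains specifically about the `stream` parameter, so a non-streaming request is a workaround while verification propagates. Here is a minimal sketch of how I'd build the request for the official `openai` Python client; the helper function is my own illustration, only the model name and prompt come from this post:

```python
# Sketch: build a Chat Completions request for the GPT-5 series.
# Two quirks from this post: streaming requires a verified org
# (hence stream=False), and these models reject `temperature`.
def build_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # avoids the "must be verified to stream" 400
        # intentionally no "temperature" key - GPT-5 models reject it
    }

payload = build_request(
    "gpt-5-mini",
    "Generate an SVG of a pelican riding a bicycle",
)
# then: client.chat.completions.create(**payload)
```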
This means I can now generate pelicans on bicycles, so here they are:
gpt-5-nano
{
  "prompt_tokens": 18,
  "prompt_unit_price": "0.05",
  "prompt_price_unit": "0.000001",
  "prompt_price": "9E-7",
  "completion_tokens": 9386,
  "completion_unit_price": "0.4",
  "completion_price_unit": "0.000001",
  "completion_price": "0.0037544",
  "total_tokens": 9404,
  "total_price": "0.0037553",
  "currency": "USD",
  "latency": 40.14214263856411
}
gpt-5-mini
{
  "prompt_tokens": 18,
  "prompt_unit_price": "0.25",
  "prompt_price_unit": "0.000001",
  "prompt_price": "0.0000045",
  "completion_tokens": 2692,
  "completion_unit_price": "2",
  "completion_price_unit": "0.000001",
  "completion_price": "0.005384",
  "total_tokens": 2710,
  "total_price": "0.0053885",
  "currency": "USD",
  "latency": 25.86077081412077
}
gpt-5
{
  "prompt_tokens": 18,
  "prompt_unit_price": "1.25",
  "prompt_price_unit": "0.000001",
  "prompt_price": "0.0000225",
  "completion_tokens": 4036,
  "completion_unit_price": "10",
  "completion_price_unit": "0.000001",
  "completion_price": "0.04036",
  "total_tokens": 4054,
  "total_price": "0.0403825",
  "currency": "USD",
  "latency": 75.47893614694476
}
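The usage blocks are internally consistent: each price is tokens × unit price × price unit, where the unit prices are per million tokens (hence `price_unit` of `0.000001`). A quick sanity check of the reported numbers:

```python
# Verify the reported prices: price = tokens * unit_price * price_unit
PRICE_UNIT = 1e-6  # unit prices above are USD per million tokens

models = {
    # name: (prompt_tokens, prompt_unit, completion_tokens, completion_unit, reported_total)
    "gpt-5-nano": (18, 0.05, 9386, 0.4, 0.0037553),
    "gpt-5-mini": (18, 0.25, 2692, 2.0, 0.0053885),
    "gpt-5":      (18, 1.25, 4036, 10.0, 0.0403825),
}

for name, (pt, pu, ct, cu, reported) in models.items():
    total = pt * pu * PRICE_UNIT + ct * cu * PRICE_UNIT
    print(f"{name}: ${total:.7f}")
    assert abs(total - reported) < 1e-9  # matches the usage block
```

So a single pelican costs anywhere from about a third of a cent (nano) to four cents (gpt-5).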
I find it interesting how the models behave: the smaller models generated the image with a background, while the gpt-5 image has none. Also, here is how the models compare in terms of latency, speed, and price:
The latency results reveal an interesting pattern: GPT-5 nano took 40 seconds, GPT-5 mini 26 seconds, and GPT-5 full 75 seconds. Counter-intuitively, the mini model was faster than nano, likely because it generated a much shorter answer (2,692 vs. 9,386 tokens). The full model takes the longest but produces the most refined output.
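Raw latency is a bit misleading here, since the models wrote answers of very different lengths; tokens per second is a fairer speed measure, and on that metric nano is actually the fastest. Computed from the completion tokens and latencies in the usage blocks above:

```python
# Throughput = completion tokens / latency (seconds), per the runs above
runs = {
    "gpt-5-nano": (9386, 40.14),
    "gpt-5-mini": (2692, 25.86),
    "gpt-5":      (4036, 75.48),
}

for name, (tokens, seconds) in runs.items():
    print(f"{name}: {tokens / seconds:.0f} tokens/s")
# nano ~234 tokens/s, mini ~104 tokens/s, gpt-5 ~53 tokens/s
```

So mini only "won" on wall-clock time because it stopped sooner, while gpt-5 is the slowest both per request and per token.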