F1 computational restrictions.

Zynerji
110
Joined: 27 Jan 2016, 16:14

F1 computational restrictions.


So, we know there are CFD limitations in place (a shame, as compute per watt keeps getting better). Maybe there should be a watt restriction instead of a TFLOP limit 🤔, like capping the fuel tank on the cars instead of the fuel flow.
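For what it's worth, here's a back-of-envelope sketch of what an energy cap (rather than a throughput cap) would reward. The budget and efficiency figures are made-up assumptions, not FIA numbers:

```python
# Hypothetical energy-budget restriction: since FLOPS/W is dimensionally
# FLOP per joule, a fixed energy allowance buys more total compute as
# hardware efficiency improves. All numbers here are illustrative.

def total_flop(energy_kwh: float, gflops_per_watt: float) -> float:
    joules = energy_kwh * 3.6e6               # 1 kWh = 3.6e6 J
    return joules * gflops_per_watt * 1e9     # GFLOP per joule -> total FLOP

budget_kwh = 10_000                           # made-up per-period allowance
for eff in (5, 20, 80):                       # GFLOPS/W: old CPU -> modern GPU
    print(f"{eff:3d} GFLOPS/W -> {total_flop(budget_kwh, eff):.2e} FLOP")
```

Under a rule like that, the same allowance buys ever more compute as hardware improves, which is exactly the incentive a fuel-tank-style cap creates.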

Anyway, how is the crowd here feeling about the introduction of AI? With the open availability of LLMs, I'm guessing the teams are now all running GPT-4-ish nodes and using them to analyze data, race outcomes, pit strategies, budgets, etc.

Does anyone think this might be the new war zone of F1? Do fans want to watch 10 game-theory supercomputers calling all the shots of a race weekend? Should this be addressed in the rules and limited?

I am personally torn on this. The sheer optimization and efficiency these systems will bring makes me excited (I'm currently using them professionally for this purpose), but the near-total sidelining of human passion and experience makes me utterly sad.

I can just hear the interviews where the teams blame the AI for bad calls and such... I hope that it never gets to that seemingly-inevitable point.😪

dialtone
107
Joined: 25 Feb 2019, 01:31

Re: F1 computational restrictions.


IMHO there's not a chance teams are running any LLM for anything critical:

* Training one is insanely expensive, comparable to RBR's catering budget per model trained: roughly $4M, 2048 A100 GPUs and about a month to train LLaMA, which only has 65B parameters (rough arithmetic in the sketch after this list).
* Running one is equally nuts on cost, especially at scale.
* On really advanced topics, like math or physics, they are usually very wrong, and you can only eventually fix the answers with enough iterations. (https://www.reddit.com/r/ChatGPT/commen ... d_in_math/ , https://cs.stanford.edu/~knuth/chatGPT20.txt )
* Lastly, if they use the OpenAI product there is no expectation of privacy for the inputs, which is a big no-no for any team (Apple, for example, has forbidden its workforce from using it).
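A rough reconstruction of that training-cost figure: the GPU count and duration are from Meta's LLaMA paper, while the hourly rate is my own assumption:

```python
# Back-of-envelope for the "$4M per training run" claim above.
gpus = 2048          # A100-80GB GPUs used for LLaMA-65B (Meta's paper)
days = 21            # roughly the reported wall-clock training time
rate_usd = 4.0       # assumed cloud price per GPU-hour (varies widely)

gpu_hours = gpus * days * 24
print(f"{gpu_hours:,} GPU-hours -> ~${gpu_hours * rate_usd:,.0f}")
# ~1,032,192 GPU-hours, i.e. roughly $4M at this rate
```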

They could use something simpler like LLaMA, but that's not really very good.

The technology here is evolving so rapidly that right now there's a new discovery every few weeks: bit quantization to shrink the LLaMA models, or a new way to architect the model four days ago (https://www.artisana.ai/articles/meta-a ... chitecture). And that's without even going into security issues like prompt injection, which are still being explored and have no really good practical solution yet.

LLMs are language models; specialized tools from the teams will outperform them except on open-ended search tasks. For example, the teams could use one of these models to test ideas, provided they had one trained on aero physics, but it's entirely possible it would spit out gibberish, because it's a language model, not a math model.

Even with all this, the tech behind LLMs is insane and amazing.

EDIT: of course programming, being something that uses a language, is actually a good candidate for LLM use, and maybe teams can use LLMs to develop software more quickly to evaluate strategies and such.

Zynerji
110
Joined: 27 Jan 2016, 16:14

Re: F1 computational restrictions.


What do they do with their CFD computers after they use up their allotment? Don't they use them for non-CFD team enhancement?

CFD hardware is nearly identical to AI hardware. And there are things like AutoGPT, Wolfram Alpha plugins for math/physics, tree-of-thought modeling, as well as multi-agent GPT setups where it spawns its own helpers.

Running a local node to crunch through team data has no security risks...

I feel this is closer than you think.

dialtone
107
Joined: 25 Feb 2019, 01:31

F1 computational restrictions.


AFAIK the limit is 25 TFlops for F1 CFD simulations. A single A100 has 312 TFlops, and FB used 2048 of those to develop their model; we're talking more than four orders of magnitude more power for a small and simple LLM.
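A quick sanity check on that comparison (the 25 TFlops cap is as quoted above; 312 TFlops is the A100's peak tensor throughput):

```python
import math

cfd_cap_tflops = 25                 # CFD limit quoted above
a100_tflops = 312                   # A100 peak tensor throughput
llama_gpus = 2048                   # GPUs Meta used for LLaMA

cluster_tflops = a100_tflops * llama_gpus
ratio = cluster_tflops / cfd_cap_tflops
print(f"{cluster_tflops:,} vs {cfd_cap_tflops} TFlops: "
      f"{ratio:,.0f}x (~{math.log10(ratio):.1f} orders of magnitude)")
# 638,976 vs 25 TFlops: 25,559x, ~4.4 orders of magnitude
```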

It might be close, but IMHO none of the teams have the knowledge in-house for this, let alone the budget to spend on developing and researching fast-changing systems like LLMs. It's simply not their business.

Edit: it seems they moved from TFlops to MAUh, or Mega Allocation Unit hours. At the end of the day these are systems optimized for efficiency, where F1 teams look for 100% utilization of every core all the time, typically on 200 or so CPUs. These systems are small and cost a fraction of what AI training costs. Their best option would be to use AWS.

Zynerji
110
Joined: 27 Jan 2016, 16:14

Re: F1 computational restrictions.


dialtone wrote:
28 May 2023, 02:53
AFAIK the limit is 25 TFlops for F1 CFD simulations. A single A100 has 312 TFlops, and FB used 2048 of those to develop their model; we're talking more than four orders of magnitude more power for a small and simple LLM.

It might be close, but IMHO none of the teams have the knowledge in-house for this, let alone the budget to spend on developing and researching fast-changing systems like LLMs. It's simply not their business.

Edit: it seems they moved from TFlops to MAUh, or Mega Allocation Unit hours. At the end of the day these are systems optimized for efficiency, where F1 teams look for 100% utilization of every core all the time, typically on 200 or so CPUs. These systems are small and cost a fraction of what AI training costs. Their best option would be to use AWS.
CAD, CAM, CFD and PLM software weren't their business at some point either.

AI and its ability to sift oceans of data are here. Ignoring it doesn't make sense.

Maybe Sam Collins can do a Tech Talk on AI systems in F1...🤔

Zynerji
110
Joined: 27 Jan 2016, 16:14

Re: F1 computational restrictions.


https://hackaday.com/2023/05/28/ai-creates-killer-drug/

It's not a large step to think this could be done with aero or mechanical generative design.

dialtone
107
Joined: 25 Feb 2019, 01:31

Re: F1 computational restrictions.


That's a whole different problem from designing a surface, and I'm actually not sure why it's news, since it's basically how biotech has been working for years.

But again, the issue in F1 is the amount of computing power available, and unless someone sells packaged software to F1 teams, it's unlikely they'll use it; it's not their skill set.

hollus
Moderator
Joined: 29 Mar 2009, 01:21
Location: Copenhagen, Denmark

Re: F1 computational restrictions.


Zynerji wrote:
28 May 2023, 18:19
https://hackaday.com/2023/05/28/ai-creates-killer-drug/

Not a large step to think this could be done with aero or mechanical generative design.
As dialtone said, that is what biotech has done for 30 years or more: throw CPU time at the problem to reduce the number of candidates, then test those.
Getting about 3 percent right is good for headlines, but not mature yet.

Maybe predicting the real moment when rain will start, from live radar images, would be a better use of neural networks right now. Today, just about every team botched it.
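To make that concrete, here's a toy sketch of the idea: a small ConvNet that takes a short stack of radar reflectivity frames and regresses minutes-until-rain at the circuit. The architecture, input shapes and the task framing are all illustrative assumptions, nothing like what a team would actually deploy:

```python
import torch
import torch.nn as nn

class RainOnsetNet(nn.Module):
    """Toy model: a stack of past radar frames in, minutes-until-rain out."""
    def __init__(self, frames: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(frames, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pool
        )
        self.head = nn.Linear(64, 1)          # regression head

    def forward(self, x):                     # x: (batch, frames, H, W)
        return self.head(self.features(x).flatten(1))

model = RainOnsetNet()
radar = torch.randn(4, 8, 128, 128)   # 4 samples, 8 past frames, 128x128 grid
print(model(radar).shape)             # torch.Size([4, 1])
```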

But it is only a matter of time, of course, until it starts to percolate into other areas of F1.
Rivals, not enemies.

beschadigunc
4
Joined: 01 Nov 2021, 22:44

Re: F1 computational restrictions.


hollus wrote:
28 May 2023, 22:27
Zynerji wrote:
28 May 2023, 18:19
https://hackaday.com/2023/05/28/ai-creates-killer-drug/

Not a large step to think this could be done with aero or mechanical generative design.
As dialtone said, that is what biotech has done for 30 years or more: throw CPU time at the problem to reduce the number of candidates, then test those.
Getting about 3 percent right is good for headlines, but not mature yet.

Maybe predicting the real moment when rain will start, from live radar images, would be a better use of neural networks right now. Today, just about every team botched it.

But it is only a matter of time, of course, until it starts to percolate into other areas of F1.
We shouldn't underestimate the exponential growth; I assume we will get there in a few years. But if every team used AGI, or some kind of specialized ASI, for aero design, all the cars would converge on the limit very quickly. So AI should be banned from design jobs (not from CFD-speed prediction tools) to keep Formula 1 human, and therefore an art-like design competition.

Zynerji
110
Joined: 27 Jan 2016, 16:14

Re: F1 computational restrictions.


Early controls, before this becomes a problem, are exactly why I started this thread.

And FYI, no one uses CPUs for these computations. They have been GPU-accelerated since OpenCL launched in 2009...

dialtone
107
Joined: 25 Feb 2019, 01:31

Re: F1 computational restrictions.


Zynerji wrote:Early controls, before this becomes a problem, are exactly why I started this thread.

And FYI, no one uses CPUs for these computations. They have been GPU-accelerated since OpenCL launched in 2009...
My man… they are computationally limited. They can use whatever they want so long as they don’t cross the limit.

https://www.boston.co.uk/solutions/workload/cfd.aspx

https://aws.amazon.com/solutions/case-s ... graviton2/

These are both primarily CPU case studies; Amazon's C6g instances don't come with GPUs.

The primary problem is that GPUs have limited memory: an A100 costs $20k, I think, with 80GB of memory. If you need 1TB for the simulation, that's a lot of GPUs, which you then need to mostly turn off because of the computational limits anyway.
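The memory arithmetic, using the figures from this post (the 1TB mesh footprint is hypothetical):

```python
import math

sim_memory_gb = 1024      # hypothetical in-memory footprint of one simulation
gpu_memory_gb = 80        # A100-80GB
gpu_cost_usd = 20_000     # approximate price quoted above

gpus_needed = math.ceil(sim_memory_gb / gpu_memory_gb)
print(f"{gpus_needed} GPUs (~${gpus_needed * gpu_cost_usd:,}) just to hold the case,")
print("most of whose compute must then sit idle under the FIA cap")
# 13 GPUs, ~$260,000, bought for memory capacity rather than FLOPs
```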

There’s then other considerations but it’s not quite the slam dunk for GPUs, although they are going to be the better solution and I suppose the stuff from Ansys is talked about very well.

dialtone
107
Joined: 25 Feb 2019, 01:31

Re: F1 computational restrictions.


Here are more resources on the topic of CPU vs GPU in CFD:

https://www.cfd-online.com/Forums/hardw ... e-wip.html
https://www.cfd-online.com/Forums/hardw ... luent.html

Overall, and it shouldn't come as a surprise, even if you use GPUs you need to feed them as fast as you can, so ultimately you also need extremely high-end CPUs.

Of particular importance is the efficiency you target with these systems: having an A100 rated at 312 TFlops that my software can only use well up to a point means I'm wasting TFlops that the FIA is going to regulate me for. It would be better to buy a system that I can use at 100% and that scales linearly with added parallelism; the two links above, for example, discuss ideal ratios between cores and memory bandwidth, and how on a 96-core CPU you can be wasting 80 of the cores.
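A roofline-style sketch of that core-wastage argument; every figure below is an assumption chosen to be in the ballpark of the forum threads linked above, not a measurement:

```python
# Bandwidth-bound estimate: how many cores can a sparse CFD solver keep busy?
mem_bw_gbs = 300          # assumed memory bandwidth per socket, GB/s
core_gflops = 10          # assumed sustained GFLOP/s per core in a sparse solver
intensity = 0.5           # assumed FLOP per byte moved (CFD kernels are low)

bw_limited_gflops = mem_bw_gbs * intensity        # bandwidth-imposed ceiling
useful_cores = bw_limited_gflops / core_gflops
print(f"Bandwidth sustains ~{bw_limited_gflops:.0f} GFLOP/s, "
      f"enough to keep ~{useful_cores:.0f} cores busy;")
print("on a 96-core part the remaining cores mostly wait on memory")
```

With these numbers only ~15 of 96 cores do useful work, which is the "wasting 80 of them" shape of the argument.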

Greg Locock
233
Joined: 30 Jun 2012, 00:48

Re: F1 computational restrictions.


ChatGPT took 6-10 iterations to write a sim of a ball bouncing elastically off a rigid floor. I had to correct it at every iteration (because I knew what the answer was), and at every point it justified the code it gave.
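For context, the whole exercise fits in a dozen lines once you know the answer; a minimal sketch using symplectic Euler integration with a velocity flip at contact:

```python
# Ball bouncing elastically off a rigid floor at y = 0.
g = 9.81                 # gravity, m/s^2
dt = 1e-3                # time step, s
y, v = 10.0, 0.0         # initial height (m) and vertical velocity (m/s)

for step in range(20_000):        # simulate 20 seconds
    v -= g * dt                   # symplectic Euler: velocity first...
    y += v * dt                   # ...then position
    if y < 0.0:                   # ball has penetrated the rigid floor
        y = -y                    # reflect back above the floor
        v = -v                    # elastic bounce: reverse velocity, no loss

print(f"after 20 s: y={y:.3f} m, v={v:.3f} m/s")
```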

Zynerji
110
Joined: 27 Jan 2016, 16:14

Re: F1 computational restrictions.


Greg Locock wrote:
29 May 2023, 02:02
ChatGPT took 6-10 iterations to write a sim of a ball bouncing elastically off a rigid floor. I had to correct it at every iteration (because I knew what the answer was), and at every point it justified the code it gave.
Are you using AutoGPT with the GPT-4 API yet?

Zynerji
110
Joined: 27 Jan 2016, 16:14

Re: F1 computational restrictions.


dialtone wrote:
29 May 2023, 00:33
Zynerji wrote:Early controls, before this becomes a problem, are exactly why I started this thread.

And FYI, no one uses CPUs for these computations. They have been GPU-accelerated since OpenCL launched in 2009...
My man… they are computationally limited. They can use whatever they want so long as they don’t cross the limit.

https://www.boston.co.uk/solutions/workload/cfd.aspx

https://aws.amazon.com/solutions/case-s ... graviton2/

These are both primarily CPU case studies; Amazon's C6g instances don't come with GPUs.

The primary problem is that GPUs have limited memory: an A100 costs $20k, I think, with 80GB of memory. If you need 1TB for the simulation, that's a lot of GPUs, which you then need to mostly turn off because of the computational limits anyway.

There’s then other considerations but it’s not quite the slam dunk for GPUs, although they are going to be the better solution and I suppose the stuff from Ansys is talked about very well.
What limits are you talking about? Are you saying a GPT-4 node would count towards their CFD limits?