Hacker News | killerstorm's comments

People did the calculation: radiative cooling requires a smaller surface area than the solar panels themselves. So, basically, a solar panel can radiate away its own waste heat.

Have you done a calculation yourself?


How can the solar panel itself radiate heat when it's being heated up while generating power? Looking at pictures of the ISS, there are radiators that look like they're there specifically to cool the solar panels.

And even if it's viable, why would you not just cool with air down on Earth? Water is used for cooling because it increases effectiveness significantly, but even a closed-loop system with simple dry-air heat exchangers is quite a lot more effective than radiative cooling.


You take the amount of energy absorbed by the solar panels and subtract the amount they radiate; the panel settles at the temperature where the two balance. Most things in physics are linear systems that work like this.
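
For a back-of-the-envelope check (all numbers here are assumptions: the solar constant, ~20% panel efficiency, emissivity ~0.9, a panel radiating from both faces, Earthshine ignored):

    # Rough equilibrium-temperature check: can a panel radiate its own waste heat?
    SOLAR_CONSTANT = 1361.0    # W/m^2 hitting the panel
    EFFICIENCY     = 0.20      # fraction converted to electricity (assumed)
    ABSORPTIVITY   = 0.95      # fraction of sunlight absorbed (assumed)
    EMISSIVITY     = 0.90      # thermal emissivity (assumed)
    SIGMA          = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    absorbed   = SOLAR_CONSTANT * ABSORPTIVITY            # heat input per m^2
    waste_heat = absorbed - SOLAR_CONSTANT * EFFICIENCY   # must be radiated away

    # Radiating from both faces: waste_heat = 2 * emissivity * sigma * T^4
    T_eq = (waste_heat / (2 * EMISSIVITY * SIGMA)) ** 0.25
    print(f"waste heat: {waste_heat:.0f} W/m^2, equilibrium temp: {T_eq:.0f} K")
    # ~1021 W/m^2 and ~316 K (~43 C): the panel sheds its own waste heat
    # at a modest temperature, without a separate radiator.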

The same radiative (radiant) cooling works almost as well on Earth, but without the cost of a rocket launch.

Anthropic added features like this in the 4.5 release:

https://claude.com/blog/context-management

> Context editing automatically clears stale tool calls and results from within the context window when approaching token limits.

> The memory tool enables Claude to store and consult information outside the context window through a file-based system.

But it looks like nobody has it as part of the inference loop yet: I guess it's hard to train (i.e. you need a training set which is a good match for how people use context in practice) and it makes inference more complicated. I guess higher-level context management is just easier to implement - and it's one of the things which "GPT wrapper" companies can do, so why bother?
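
As an app-level sketch of roughly those two ideas (this is not Anthropic's API; the token heuristic, thresholds, and memory layout are all assumptions):

    # Hypothetical context management outside the model: clear stale tool
    # results when nearing the limit, and keep a file-based memory around.
    import os

    MAX_TOKENS = 180_000
    MEMORY_DIR = "memory"

    def approx_tokens(messages):
        return sum(len(m["content"]) for m in messages) // 4   # crude: ~4 chars/token

    def clear_stale_tool_results(messages, keep_last=5):
        tool_idx = [i for i, m in enumerate(messages) if m["role"] == "tool"]
        stale = set(tool_idx[:-keep_last])                      # all but the newest few
        return [m for i, m in enumerate(messages) if i not in stale]

    def memory_write(key, text):
        os.makedirs(MEMORY_DIR, exist_ok=True)
        with open(os.path.join(MEMORY_DIR, key + ".md"), "w") as f:
            f.write(text)

    def memory_read(key):
        path = os.path.join(MEMORY_DIR, key + ".md")
        return open(path).read() if os.path.exists(path) else ""

    def before_model_call(messages):
        if approx_tokens(messages) > MAX_TOKENS:
            messages = clear_stale_tool_results(messages)
        return messages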


> Is any of this standardization really needed?

This standardization, basically, makes a list of docs easier to scan.

As a human, you have permanent memory. LLMs don't have it; they have to load information into the context, and doing that only as necessary helps.

E.g. if you had anterograde amnesia, you'd want everything to be optimally organized, labeled, etc, right? Perhaps an app which keeps all information handy.
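
Toy illustration of "load it only as necessary" (the index format and file names are made up):

    # Instead of stuffing every doc into the context, scan a short index
    # and load only the entries that look relevant to the current question.
    INDEX = {
        "docs/install.md":  "how to install the product, supported platforms",
        "docs/auth.md":     "API keys, OAuth setup, token rotation",
        "docs/webhooks.md": "webhook events, retries, signature verification",
    }

    def relevant_docs(question, index):
        words = set(question.lower().split())
        return [path for path, summary in index.items()
                if words & set(summary.lower().split())]

    def build_context(question):
        parts = []
        for path in relevant_docs(question, INDEX):
            with open(path) as f:
                parts.append(f"## {path}\n{f.read()}")
        return "\n\n".join(parts)

    # relevant_docs("rotate api keys", INDEX) -> ["docs/auth.md"]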


Everybody wants that, though, no? At least some of the time?

For example, if you've just joined a new team or a new project, wouldn't you like to have extensive, well-organised documentation to help get you started?

This reminds me of the "curb-cut effect", where accommodations for disabilities can be beneficial for everybody: https://front-end.social/@stephaniewalter/115841555015911839


inb4 people re-discover RAG, re-branding it as a parallel speculative data lookup

I think the internet needs a shared reputation & identity layer - i.e. if somebody offers a comment/review/contribution/etc, it should be easy to check what else they are contributing, who can vouch for them, etc.

Most of the innovation came from web startups who are just not interested in "shared" anything: they want to be a monopoly, "own" users, etc. So this area has been neglected, and then people got used to the status quo.

PGP/GPG used to have a web of trust, but that sort of just died.

People either need to resurrect the WoT, updated for the modern era, or just accept the fact that everything is spammed into smithereens. Blaming AI and social media does not help.
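
As a sketch of what "easy to check" could look like (the vouching graph and hop cutoff are made up, not an existing protocol):

    # Toy web-of-trust check: is this identity vouched for within a couple
    # of hops from people I already trust?
    from collections import deque

    VOUCHES = {            # who vouches for whom (made-up example data)
        "me":    ["alice", "bob"],
        "alice": ["carol"],
        "bob":   ["carol", "dave"],
        "carol": ["eve"],
    }

    def trust_distance(root, target, max_hops=2):
        seen, queue = {root}, deque([(root, 0)])
        while queue:
            node, hops = queue.popleft()
            if node == target:
                return hops
            if hops == max_hops:
                continue
            for nxt in VOUCHES.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
        return None   # nobody I trust vouches for them within max_hops

    print(trust_distance("me", "carol"))  # 2 -> vouched for
    print(trust_distance("me", "eve"))    # None -> outside my trust horizon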


Well, obviously, `npm` has the same destructive power: a package might include a script which steals secrets or wipes the hard drive. But people just assume that, usually, packages don't.


I don't believe this would be more efficient.

Use of common tools like `ls` and file patching is already baked into the model's weights; it can do that with minimal effort, leaving more room for actually thinking about the app's code.

If you force it to wrap these actions in non-standard tools, you're basically distracting the model: it has to think about app code and tool code in the same context.

In some cases it does make sense to encourage the model to create utilities for itself - but you can do that without enforcing code-only.


It doesn't matter if it's less efficient; what matters is that it has more chances to verify and get it right. It's hard to roll back a series of tool calls. It's easier to revert state and rerun a complete piece of code until you get the desired result.
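
A rough sketch of that loop, assuming the workspace is a git repo (the check command and the regeneration step are placeholders):

    # "Revert state and rerun": run the generated code, keep it if the check
    # passes, otherwise roll the working tree back and let the model retry.
    import subprocess

    def run(cmd):
        return subprocess.run(cmd, shell=True, capture_output=True, text=True)

    def try_generated_code(script_path, check_cmd, attempts=3):
        for _ in range(attempts):
            ran = run(f"python {script_path}")
            if ran.returncode == 0 and run(check_cmd).returncode == 0:
                return True                  # keep the changes
            run("git checkout -- .")         # revert tracked files
            run("git clean -fd")             # drop newly created untracked files
            # ...regenerate script_path here before the next attempt
        return False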


I don't think "efficiency" is at all the point? At all?

It's about safety, reliability, and human understanding -- and those, like OOP for example, are often directly at odds with "efficiency."


We should differentiate AI models from AI apps.

Models just generate text. Apps are supposed to make that text useful.

An app can run various kinds of verification. But would you pay extra for that?

Nobody can make a text generator that outputs text which is 100% correct. That's just not a thing people can do right now.
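
An app-side loop like this is roughly what "run various kinds of verification" means in practice (call_model is a placeholder, and the checks are just illustrative):

    # The app wraps the text generator with verification and retries.
    import json

    def call_model(prompt: str) -> str:
        raise NotImplementedError   # placeholder for the actual model call

    def generate_config(prompt, retries=3):
        last_error = ""
        for _ in range(retries):
            hint = f"\nFix this error: {last_error}" if last_error else ""
            text = call_model(prompt + hint)
            try:
                config = json.loads(text)   # check 1: output parses as JSON
                assert "name" in config     # check 2: required field is present
                return config
            except (json.JSONDecodeError, AssertionError) as e:
                last_error = str(e)
        raise RuntimeError("no valid output after retries: " + last_error)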


True.

Also true that most tech writers are bad. And companies aren't going to spend >$200k/year on a tech writer until they hit tens of millions in revenue. So AI fills the gap.

As a horror story, our docs team didn't understand that having correct installation links should be one of their top priorities. Obviously, if a potential customer can't install the product, they'll assume it's BS and try to find an alternative. It's so much more important than, e.g., grammar in the middle of some guide.


Consider a hypothetical scenario: some toxin present in the environment is causing migraine symptoms.

A doctor following diagnostic criteria might assign a "migraine" diagnosis and provide standard recommendations for migraine management.

Another doctor, seeing a quick uptick in patients with migraine symptoms, will try to investigate toxins and infections.

Which doctor is doing something useful here?

