If you don't understand consciousness, how to make it from first principles and how it works, then I don't think you can confidently say "this isn't conscious" about much.
We don't need to lean on consciousness or other mysteries at all. Nor do we have to when a rock changes color as it gets wet.
And without this parsimony, we could claim that any unexplained mystery underlies any well-understood phenomenon, which doesn't sound like much of an epistemic standard.
Ultimately it's a bit of an imprecise human concept. The boundaries of what fits in there might be somewhat unclear, but there are definitely things that intuitively are (humans) and aren't (plants, rocks) in this set.
But if I cut myself, no amount of science can currently assess how much pain I feel or how much it bothers me.
You can observe that a human and a record player can both say "hello", but you cannot argue from that alone that there is no way to disprove that a record player might wish to express a greeting to a fellow being.
A simple process can duplicate the outward appearance and effect of a complex one (an mp3 player can talk), and a complex process can duplicate the outward appearance and effect of a simple one (a human can crank a drive shaft), and neither of these means that one might just as well be the other. They don't mean anything at all by themselves either for proving or disproving.
Humans reacting to stimuli in largely similar ways to a plant, or even to a plain physical process like water filling a vessel or diffusion, neither proves nor disproves, nor even merely implies or suggests, nor even merely opens any doors to any room for doubts about anything.
It could be that there is no fundamental difference between a human and a plant and a toaster, but this observation about similar behavior provides nothing towards the argument.
Perhaps "consciousness" is just a poor term to use in a scientific discussion.
The same for a plant; if you cut it, science won't tell you how much pain it feels, or how much it's bothered by your act of violence.
If we're going to agree on anything, I just wish consciousness discussions could agree on some phenomenological referent(s) for the term "consciousness". The word is used in a way that is little more than a sed-replace for elan vital, relegating all discourse to little more than a volley of solipsistic value proclamations IMHO.
Mr. J.C. Bose explored exactly these ideas more than a century ago!
Isn't the capability of dreaming and simulating situations in your head the definition of consciousness?
Plenty of AIs are capable of something very much akin to "dreaming and simulating situations in your head" too. Humans really hate the idea of AIs being conscious, so surely that means dreaming can't be in any way important for determining whether something is conscious or not.
I find some irony in the mention of elan vital upthread - on the one hand, most people here wouldn't let themselves be caught dead believing in elan vital, but then switch to any thread discussing AI, or even cognition in animals (or plants, like here), and suddenly vitalism becomes the mainstream position once again.
In the plant case, among many other interesting things (fungal interactions, say) I think of examples like: How roots grow towards water/nutrients (even if rocks are in the way). How leaves/branches can lean-or-orient-towards and grow-towards sunlight. The kind of self-healing that this article describes, and the overall way in which everything a plant does tends to lead towards it being able to reproduce via seeds or spores (or one of the many other ways that plants can reproduce!).
I'll freely admit my perspective on this is influenced a lot by Michael Levin's work around hierarchies of agency (many vids on YouTube). In many of his talks he describes how agency can be treated as a measurable quantity, like something from engineering. This places it far from philosophical or abstract definitions and more in a cybernetic realm, where agency is a measurable thing and can either be instructed from outside (like a thermostat) or from within - like when a hungry animal (or plant) seeks food - and as you go up the scale-level for agency you get entities that can legitimately have (and achieve) clear longer-term (and spatially larger) goals. Smart animals - such as humans - can even, with a mostly straight face, talk about and put into place plans that reach far into the future compared to their own expected lifetime.
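To make the "instructed from outside vs. from within" distinction concrete, here's a minimal toy sketch in Python - not anything from Levin's papers, just an illustration with made-up names and numbers - contrasting a thermostat whose setpoint is handed to it with an agent whose own internal state generates the goal:

    # Toy sketch (not from Levin's work; all names and numbers are illustrative):
    # contrast a controller whose goal is set from outside (a thermostat) with
    # one whose goal arises from its own internal state (a "hungry" agent).

    def thermostat_step(temperature: float, setpoint: float) -> str:
        """The goal (the setpoint) is instructed entirely from outside the system."""
        return "heat on" if temperature < setpoint else "heat off"

    class HungryAgent:
        """The goal arises from within: an internal energy store drives behaviour."""

        def __init__(self, energy: float = 1.0, hunger_threshold: float = 0.5):
            self.energy = energy
            self.hunger_threshold = hunger_threshold

        def step(self, food_available: bool) -> str:
            self.energy -= 0.1                       # existing costs energy each tick
            if self.energy < self.hunger_threshold:  # the agent's own state sets the goal
                if food_available:
                    self.energy += 0.4
                    return "eat"
                return "seek food"
            return "rest"

    if __name__ == "__main__":
        print(thermostat_step(temperature=18.0, setpoint=21.0))  # -> heat on
        agent = HungryAgent()
        for tick in range(8):
            print(tick, agent.step(food_available=(tick % 3 == 0)))

The point of the toy: the thermostat never decides anything about its own goal, while the agent's behaviour is driven by a variable it maintains itself - a crude stand-in for the lowest rungs of the hierarchy being described.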
So yeah I agree plants certainly can't comprehend BBC2, but I think if that's the definition of agency then we're really not talking about the same thing.
I agree that plants worry about light and water, but they don't just sit there and do nothing! They respond with their agency. Maybe it's an agency we find hard to recognize and don't fully appreciate, and we'll often call it small/slow or disregard it, but I personally think that plant agency is made of the same stuff as human agency (and we just have a special word for it when it's us: consciousness).