----------hey, I'm bloggin' here
Aside from constructive criticism and lavish praise from those who comment on our work, we also want to know where we stand among the other poets here. We want a rating, a place in a pecking order----even those of us who never peck want to know whom to dodge, or whose poetic opinions weigh more. There is in this an obvious parallel with IQ, a number that we create through a questionable procedure, but that we take as a fair approximation of our intellectual pecking order, within some bounds.
I believe it is possible to devise our own questionable procedure to produce a similarly acceptable measure of poetic horsepower, through a system that compares the works of poets and evaluates them with a simple zero-sum award system. I believe this will work even though the many possible axes of comparison employed by the critics/judges---who are poets themselves---are a very mixed and disorderly bag.
The system I have in mind would run like this:
A member would log in and be shown two poems side by side (bare of byline and poet's comments initially; these would be revealed later, if desired). The member would then click on the poem to the left, the poem to the right, or on the mark between them, to indicate 'the superior poem' or 'poems I take to be equals.' Immediately afterward, a list of tags and a text box would appear; the member could click on one of the tags already in place, or use the text box to create a new tag naming the criterion behind his/her critical decision, or simply hit the return key to leave the routine (a default 'uninteresting comparison' tag has the same effect).
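The routine above could be sketched in a few lines. Everything here is hypothetical---the `record_comparison` and `next_pair` names, the tuple record format, the in-memory list---since nothing like this exists on the site; it is only a sketch of the flow just described:

```python
import random

# Hypothetical in-memory store; a real site would use a database.
# Each entry: (left_id, right_id, result, criterion_tag)
comparisons = []

def record_comparison(left_id, right_id, result, tag="uninteresting comparison"):
    """result is 'left', 'right', or 'draw'; tag names the criterion cited.

    Hitting return without choosing a tag falls through to the default
    'uninteresting comparison' tag, as in the proposal.
    """
    assert result in ("left", "right", "draw")
    comparisons.append((left_id, right_id, result, tag))

def next_pair(poem_ids):
    """Pick two distinct poems at random to show side by side."""
    return random.sample(poem_ids, 2)

left, right = next_pair(["poem_a", "poem_b", "poem_c"])
record_comparison(left, right, "draw")  # member hit return: default tag
record_comparison("poem_a", "poem_b", "left", tag="better flow")
```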
That's it. Numbers will be generated, criteria will emerge, and with each login, we extract a small sweat-fee from each member.
The system would NOT replace the currently operative system---it is intended to supply some objective measure of our poetry. Of course there would remain the question of what was being measured, but, incontestably, it would measure the relative standing of two poems from somebody's point of view every frickin' time.
As the numbers accrue, it becomes possible to find patterns of criticism. Eventually, one could have a degree of certainty that a poem's current rating, e.g., '103 of a possible 500' (assuming the poem has been 'weighed' 250 times), holds this meaning at least, that the poem kinda sucks in comparison to other works.
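That 'points of a possible total' figure follows from a zero-sum scoring rule: two points at stake per matchup, two to the winner, one apiece on a draw (so 250 weighings yield a possible 500). A minimal sketch, assuming a hypothetical record format of `(left_id, right_id, result, tag)`:

```python
def rating(poem_id, comparisons):
    """Return (points, possible) for one poem: win = 2, draw = 1, loss = 0.

    `comparisons` is a list of (left_id, right_id, result, tag) tuples,
    where result is 'left', 'right', or 'draw' -- a made-up record
    format for illustration, not anything the site actually stores.
    """
    points = possible = 0
    for left, right, result, _tag in comparisons:
        if poem_id not in (left, right):
            continue
        possible += 2  # two points at stake in every matchup
        if result == "draw":
            points += 1
        elif (result == "left") == (poem_id == left):
            points += 2  # this poem was the one picked as superior
    return points, possible

history = [("case", "sonnet30", "right", "adherence to form"),
           ("case", "other", "draw", "intuition")]
print(rating("case", history))  # -> (1, 4)
```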
I could live with that.
----------
The argument will come up inevitably: you can't compare apples and ursine penile bones!
Well, obviously you can compare any two objects. A housefly is smaller than a can of coffee; I chose to compare them on size. You might choose to make the same comparison differently, say, 'a housefly is livelier than a can of coffee.' There is not much that can be done with just those two comparisons---but it is not the case that we've wasted time. We have, for instance, discovered that 'houseflies and cans of coffee are comparable by size or liveliness.' That is not a bad thing to know. When the heap of comparisons with their 'parameters of comparison' is large enough, we may find that we have quite a lot of data/observations that are convertible to knowledge.
Houseflies and coffee cans are alike in their ability to wage war on Atlantis.
----------implications of the comparisons: a thought experiment
A poem that goes head to head starts out with a rating of 0/0. It has no points, and for good cause: it has been in no contests. After its first match, it finds that it is inferior on 'adherence to form' in comparison to the sonnet it went up against. (That blow is softened somewhat on the revelation that the opposing poem was Shakespeare's Sonnet 30). The case poem is now rated 0/2; Sonnet 30 is now rated 9845/10604.
The next matchups result in a few wins, a few more losses, and a considerable number of draws. The case poem is at 6/24. Criteria mentioned: 'adherence to form,' 'better flow' (3 times), 'intuition' (5 times), 'efficacy with regard to purpose,' 'melody,' and 'less distasteful theme.' Analysis of individual matchups shows that when 'intuition' is the criterion, the case poem is at 4/6---too small a sample for any conclusion drawn there to be reliable.
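That per-criterion analysis is just the overall tally grouped by tag. Another sketch under the same assumptions (hypothetical `(left_id, right_id, result, tag)` records; win = 2, draw = 1):

```python
from collections import defaultdict

def by_criterion(poem_id, comparisons):
    """Break one poem's score down by the criterion cited in each matchup.

    Each comparison is a hypothetical (left_id, right_id, result, tag)
    record; result is 'left', 'right', or 'draw'.
    """
    tally = defaultdict(lambda: [0, 0])  # tag -> [points, possible]
    for left, right, result, tag in comparisons:
        if poem_id not in (left, right):
            continue
        tally[tag][1] += 2
        if result == "draw":
            tally[tag][0] += 1
        elif (result == "left") == (poem_id == left):
            tally[tag][0] += 2
    return dict(tally)

history = [
    ("case", "a", "left", "better flow"),
    ("case", "b", "left", "better flow"),
    ("case", "c", "right", "better flow"),
    ("case", "d", "draw", "intuition"),
]
print(by_criterion("case", history))
# -> {'better flow': [4, 6], 'intuition': [1, 2]}
```

Once the per-tag samples grow large enough, the 70/100-on-'flow' sort of confidence described below the fold becomes a straightforward lookup.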
Yeah, that'll work. If the case poem were 70/100 on 'better flow,' we could be pretty confident that the poem was superior on that axis; and if 'flow' is the critical mode, the criterion most often cited, we'd know something more, wouldn't we? Still nothing to write a lengthy dissertation about.