Monday 8 June 2009

Thinking about the Synoptic Problem from a systems perspective at the weekend, I came up with a relatively simple model:

For each pair of extant documents there is a separate synoptic problem. The overall synoptic problem is the combination of these individual pairwise problems.

Each pair of documents (call them A and B) sits on a continuum.

At one extreme, B copied directly from A, possibly omitting some material and adding content of its own.

At the other extreme, A copied directly from B, again with possible omissions and additions.

In the middle of the continuum, A and B are independent and their agreements are due to some common third source.

[Diagram: the three cases along the continuum, with blended scenarios shown in shades of grey]
These three cases are black and white and easy to draw pictures of. Filling in the continuum are other scenarios. Maybe A and B had some common sources (they'd both heard similar sermons from visiting preachers, they'd both had access to some of the same letters), and B also copied from A. Now the relationship is a blend, sitting somewhere between the common-source middle and one extreme of the continuum. I've tried to show this in shades of grey in the diagram.
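
To make the model concrete, here's a minimal sketch in Python of how a pair's position on the continuum might be represented. The PairRelation class, the sign convention (-1 for A copying B, +1 for B copying A, 0 for independence via a common source) and the placeholder numbers are all my own assumptions for illustration, not output from any real analysis.

```python
from dataclasses import dataclass

@dataclass
class PairRelation:
    """One document pair's position on the continuum.

    position: -1.0 = A copied directly from B
              +1.0 = B copied directly from A
               0.0 = A and B independent; agreements via a common source
    Intermediate values represent blends (shared sources plus some copying).
    error: the estimated uncertainty in the position.
    """
    doc_a: str
    doc_b: str
    position: float
    error: float

# The overall synoptic problem is then just the set of pairwise relations
# (placeholder numbers -- a real analysis would estimate these):
synoptic_problem = [
    PairRelation("Matthew", "Mark", position=0.0, error=1.0),
    PairRelation("Matthew", "Luke", position=0.0, error=1.0),
    PairRelation("Mark", "Luke", position=0.0, error=1.0),
]
```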

With three synoptic gospels there are three pairs of texts, so a statistical analysis of the synoptic problem should provide a point on the continuum for each of the three pairs and, more importantly, an estimate of the error for each. It may be that the relationship between Matthew and Luke, for example, is placed somewhere between a common source (the Q hypothesis) and Luke copying Matthew (the Farrer hypothesis), with error margins wide enough that both sides can claim victory.
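
As a toy illustration of that last point, the check below places named hypotheses at landmark points on the Matthew-Luke continuum and asks which of them fall within the error margin of an estimated position. The landmark values and the example numbers are invented for illustration; only the shape of the argument comes from the model above.

```python
def compatible_hypotheses(position, error):
    """Return the hypotheses whose landmark points on the Matthew-Luke
    continuum fall within the error margin of the estimated position.

    Hypothetical landmarks (same sign convention as the sketch above,
    with A = Matthew, B = Luke):
      0.0 -> common source (Q hypothesis)
      1.0 -> Luke copied Matthew (Farrer hypothesis)
    """
    landmarks = {
        "Q (common source)": 0.0,
        "Farrer (Luke used Matthew)": 1.0,
    }
    return [name for name, point in landmarks.items()
            if abs(position - point) <= error]

# An estimate of 0.4 with a margin of 0.6 covers both landmarks,
# so both camps could claim the result supports them:
print(compatible_hypotheses(0.4, 0.6))
# -> ['Q (common source)', 'Farrer (Luke used Matthew)']
```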

Would they? Or would a dyed-in-the-wool Q fan regard anything less than smack in the middle as a contradiction? From what I've read, most researchers seem to pay only lip service to the possibility that the solution is a shade of grey.
